00:00:00.002 Started by upstream project "autotest-nightly-lts" build number 2257
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3516
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.073 The recommended git tool is: git
00:00:00.073 using credential 00000000-0000-0000-0000-000000000002
00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.110 Fetching changes from the remote Git repository
00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.170 Using shallow fetch with depth 1
00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.171 > git --version # timeout=10
00:00:00.225 > git --version # 'git version 2.39.2'
00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.122 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.133 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.145 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:06.145 > git config core.sparsecheckout # timeout=10
00:00:06.156 > git read-tree -mu HEAD # timeout=10
00:00:06.172 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:06.195 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:06.195 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:06.303 [Pipeline] Start of Pipeline
00:00:06.322 [Pipeline] library
00:00:06.324 Loading library shm_lib@master
00:00:06.324 Library shm_lib@master is cached. Copying from home.
00:00:06.346 [Pipeline] node
00:00:06.356 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.358 [Pipeline] {
00:00:06.368 [Pipeline] catchError
00:00:06.370 [Pipeline] {
00:00:06.384 [Pipeline] wrap
00:00:06.394 [Pipeline] {
00:00:06.402 [Pipeline] stage
00:00:06.404 [Pipeline] { (Prologue)
00:00:06.622 [Pipeline] sh
00:00:06.908 + logger -p user.info -t JENKINS-CI
00:00:06.928 [Pipeline] echo
00:00:06.930 Node: WFP4
00:00:06.936 [Pipeline] sh
00:00:07.270 [Pipeline] setCustomBuildProperty
00:00:07.283 [Pipeline] echo
00:00:07.285 Cleanup processes
00:00:07.290 [Pipeline] sh
00:00:07.575 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.575 3849997 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.587 [Pipeline] sh
00:00:07.873 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.873 ++ grep -v 'sudo pgrep'
00:00:07.873 ++ awk '{print $1}'
00:00:07.873 + sudo kill -9
00:00:07.873 + true
00:00:07.887 [Pipeline] cleanWs
00:00:07.896 [WS-CLEANUP] Deleting project workspace...
00:00:07.896 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.903 [WS-CLEANUP] done
00:00:07.907 [Pipeline] setCustomBuildProperty
00:00:07.924 [Pipeline] sh
00:00:08.206 + sudo git config --global --replace-all safe.directory '*'
00:00:08.302 [Pipeline] httpRequest
00:00:09.010 [Pipeline] echo
00:00:09.012 Sorcerer 10.211.164.101 is alive
00:00:09.021 [Pipeline] retry
00:00:09.023 [Pipeline] {
00:00:09.037 [Pipeline] httpRequest
00:00:09.041 HttpMethod: GET
00:00:09.041 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:09.042 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:09.050 Response Code: HTTP/1.1 200 OK
00:00:09.050 Success: Status code 200 is in the accepted range: 200,404
00:00:09.050 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:12.245 [Pipeline] }
00:00:12.259 [Pipeline] // retry
00:00:12.265 [Pipeline] sh
00:00:12.550 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:12.566 [Pipeline] httpRequest
00:00:12.953 [Pipeline] echo
00:00:12.955 Sorcerer 10.211.164.101 is alive
00:00:12.966 [Pipeline] retry
00:00:12.968 [Pipeline] {
00:00:12.982 [Pipeline] httpRequest
00:00:12.987 HttpMethod: GET
00:00:12.987 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:12.987 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:13.010 Response Code: HTTP/1.1 200 OK
00:00:13.010 Success: Status code 200 is in the accepted range: 200,404
00:00:13.010 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:01:18.488 [Pipeline] }
00:01:18.505 [Pipeline] // retry
00:01:18.513 [Pipeline] sh
00:01:18.797 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:01:21.349 [Pipeline] sh
00:01:21.634 + git -C spdk log --oneline -n5
00:01:21.634 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:21.634 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:21.634 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:21.634 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:21.634 9469ea403 nvme/fio_plugin: add trim support
00:01:21.647 [Pipeline] }
00:01:21.661 [Pipeline] // stage
00:01:21.669 [Pipeline] stage
00:01:21.671 [Pipeline] { (Prepare)
00:01:21.687 [Pipeline] writeFile
00:01:21.703 [Pipeline] sh
00:01:21.988 + logger -p user.info -t JENKINS-CI
00:01:22.000 [Pipeline] sh
00:01:22.283 + logger -p user.info -t JENKINS-CI
00:01:22.294 [Pipeline] sh
00:01:22.577 + cat autorun-spdk.conf
00:01:22.577 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.577 SPDK_TEST_NVMF=1
00:01:22.577 SPDK_TEST_NVME_CLI=1
00:01:22.577 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:22.577 SPDK_TEST_NVMF_NICS=e810
00:01:22.577 SPDK_RUN_UBSAN=1
00:01:22.577 NET_TYPE=phy
00:01:22.577 RUN_NIGHTLY=1
00:01:22.589 [Pipeline] readFile
00:01:22.614 [Pipeline] withEnv
00:01:22.616 [Pipeline] {
00:01:22.629 [Pipeline] sh
00:01:22.915 + set -ex
00:01:22.915 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:22.915 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:22.915 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.915 ++ SPDK_TEST_NVMF=1
00:01:22.915 ++ SPDK_TEST_NVME_CLI=1
00:01:22.915 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:22.915 ++ SPDK_TEST_NVMF_NICS=e810
00:01:22.915 ++ SPDK_RUN_UBSAN=1
00:01:22.915 ++ NET_TYPE=phy
00:01:22.915 ++ RUN_NIGHTLY=1
00:01:22.915 + case $SPDK_TEST_NVMF_NICS in
00:01:22.915 + DRIVERS=ice
00:01:22.915 + [[ tcp == \r\d\m\a ]]
00:01:22.915 + [[ -n ice ]]
00:01:22.915 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:22.915 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:22.915 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:22.915 rmmod: ERROR: Module i40iw is not currently loaded
00:01:22.915 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:22.915 + true
00:01:22.915 + for D in $DRIVERS
00:01:22.915 + sudo modprobe ice
00:01:22.915 + exit 0
00:01:22.925 [Pipeline] }
00:01:22.942 [Pipeline] // withEnv
00:01:22.949 [Pipeline] }
00:01:22.964 [Pipeline] // stage
00:01:22.974 [Pipeline] catchError
00:01:22.976 [Pipeline] {
00:01:22.991 [Pipeline] timeout
00:01:22.992 Timeout set to expire in 1 hr 0 min
00:01:22.993 [Pipeline] {
00:01:23.007 [Pipeline] stage
00:01:23.009 [Pipeline] { (Tests)
00:01:23.022 [Pipeline] sh
00:01:23.308 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:23.308 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:23.308 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:23.308 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:23.308 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.308 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:23.308 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:23.308 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:23.308 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:23.308 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:23.308 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:23.308 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:23.308 + source /etc/os-release
00:01:23.308 ++ NAME='Fedora Linux'
00:01:23.308 ++ VERSION='39 (Cloud Edition)'
00:01:23.308 ++ ID=fedora
00:01:23.308 ++ VERSION_ID=39
00:01:23.308 ++ VERSION_CODENAME=
00:01:23.308 ++ PLATFORM_ID=platform:f39
00:01:23.308 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:23.308 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:23.308 ++ LOGO=fedora-logo-icon
00:01:23.308 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:23.308 ++ HOME_URL=https://fedoraproject.org/
00:01:23.308 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:23.308 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:23.308 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:23.308 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:23.308 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:23.308 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:23.308 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:23.308 ++ SUPPORT_END=2024-11-12
00:01:23.308 ++ VARIANT='Cloud Edition'
00:01:23.308 ++ VARIANT_ID=cloud
00:01:23.308 + uname -a
00:01:23.308 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:23.308 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:25.848 Hugepages
00:01:25.848 node hugesize free / total
00:01:25.848 node0 1048576kB 0 / 0
00:01:25.848 node0 2048kB 0 / 0
00:01:25.848 node1 1048576kB 0 / 0
00:01:25.848 node1 2048kB 0 / 0
00:01:25.848
00:01:25.848 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:25.848 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:25.848 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:25.848 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:25.848 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:25.848 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:25.848 + rm -f /tmp/spdk-ld-path
00:01:25.848 + source autorun-spdk.conf
00:01:25.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.848 ++ SPDK_TEST_NVMF=1
00:01:25.848 ++ SPDK_TEST_NVME_CLI=1
00:01:25.848 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.848 ++ SPDK_TEST_NVMF_NICS=e810
00:01:25.848 ++ SPDK_RUN_UBSAN=1
00:01:25.848 ++ NET_TYPE=phy
00:01:25.848 ++ RUN_NIGHTLY=1
00:01:25.848 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:25.848 + [[ -n '' ]]
00:01:25.848 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.848 + for M in /var/spdk/build-*-manifest.txt
00:01:25.848 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:25.848 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:25.848 + for M in /var/spdk/build-*-manifest.txt
00:01:25.848 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:25.848 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:25.848 + for M in /var/spdk/build-*-manifest.txt
00:01:25.848 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:25.848 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:25.848 ++ uname
00:01:25.848 + [[ Linux == \L\i\n\u\x ]]
00:01:25.848 + sudo dmesg -T
00:01:25.849 + sudo dmesg --clear
00:01:25.849 + dmesg_pid=3850911
00:01:25.849 + [[ Fedora Linux == FreeBSD ]]
00:01:25.849 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.849 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.849 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:25.849 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:25.849 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:25.849 + [[ -x /usr/src/fio-static/fio ]]
00:01:25.849 + export FIO_BIN=/usr/src/fio-static/fio
00:01:25.849 + FIO_BIN=/usr/src/fio-static/fio
00:01:25.849 + sudo dmesg -Tw
00:01:25.849 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:25.849 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:25.849 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:25.849 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.849 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.849 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:25.849 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.849 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.849 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:25.849 Test configuration:
00:01:25.849 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.849 SPDK_TEST_NVMF=1
00:01:25.849 SPDK_TEST_NVME_CLI=1
00:01:25.849 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.849 SPDK_TEST_NVMF_NICS=e810
00:01:25.849 SPDK_RUN_UBSAN=1
00:01:25.849 NET_TYPE=phy
00:01:25.849 RUN_NIGHTLY=1
07:20:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:25.849 07:20:29 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:25.849 07:20:29 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:25.849 07:20:29 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:25.849 07:20:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.849 07:20:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.849 07:20:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.849 07:20:29 -- paths/export.sh@5 -- $ export PATH
00:01:25.849 07:20:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.849 07:20:29 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:25.849 07:20:29 -- common/autobuild_common.sh@440 -- $ date +%s
00:01:25.849 07:20:29 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728278429.XXXXXX
00:01:25.849 07:20:29 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728278429.lgXcq6
00:01:25.849 07:20:29 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:25.849 07:20:29 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:01:25.849 07:20:29 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:25.849 07:20:29 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:25.849 07:20:29 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:25.849 07:20:29 -- common/autobuild_common.sh@456 -- $ get_config_params
00:01:25.849 07:20:29 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:25.849 07:20:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.849 07:20:29 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:25.849 07:20:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:25.849 07:20:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:25.849 07:20:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.849 07:20:29 -- spdk/autobuild.sh@16 -- $ date -u
00:01:25.849 Mon Oct 7 05:20:29 AM UTC 2024
00:01:25.849 07:20:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:25.849 LTS-66-g726a04d70
00:01:25.849 07:20:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:25.849 07:20:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:25.849 07:20:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:25.849 07:20:29 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:25.849 07:20:29 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:25.849 07:20:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.849 ************************************
00:01:25.849 START TEST ubsan
00:01:25.849 ************************************
00:01:25.849 07:20:29 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:25.849 using ubsan
00:01:25.849
00:01:25.849 real 0m0.000s
00:01:25.849 user 0m0.000s
00:01:25.849 sys 0m0.000s
00:01:25.849 07:20:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:25.849 07:20:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.849 ************************************
00:01:25.849 END TEST ubsan
00:01:25.849 ************************************
00:01:25.849 07:20:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:25.849 07:20:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:25.849 07:20:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:25.849 07:20:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:25.849 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:25.849 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:26.108 Using 'verbs' RDMA provider
00:01:38.901 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:48.893 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:49.153 Creating mk/config.mk...done.
00:01:49.153 Creating mk/cc.flags.mk...done.
00:01:49.153 Type 'make' to build.
00:01:49.153 07:20:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:49.153 07:20:53 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:49.153 07:20:53 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:49.153 07:20:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.153 ************************************
00:01:49.153 START TEST make
00:01:49.153 ************************************
00:01:49.153 07:20:53 -- common/autotest_common.sh@1104 -- $ make -j96
00:01:49.414 make[1]: Nothing to be done for 'all'.
00:01:57.542 The Meson build system
00:01:57.542 Version: 1.5.0
00:01:57.542 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:57.542 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:57.542 Build type: native build
00:01:57.542 Program cat found: YES (/usr/bin/cat)
00:01:57.542 Project name: DPDK
00:01:57.542 Project version: 23.11.0
00:01:57.543 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:57.543 C linker for the host machine: cc ld.bfd 2.40-14
00:01:57.543 Host machine cpu family: x86_64
00:01:57.543 Host machine cpu: x86_64
00:01:57.543 Message: ## Building in Developer Mode ##
00:01:57.543 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:57.543 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:57.543 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:57.543 Program python3 found: YES (/usr/bin/python3)
00:01:57.543 Program cat found: YES (/usr/bin/cat)
00:01:57.543 Compiler for C supports arguments -march=native: YES
00:01:57.543 Checking for size of "void *" : 8
00:01:57.543 Checking for size of "void *" : 8 (cached)
00:01:57.543 Library m found: YES
00:01:57.543 Library numa found: YES
00:01:57.543 Has header "numaif.h" : YES
00:01:57.543 Library fdt found: NO
00:01:57.543 Library execinfo found: NO
00:01:57.543 Has header "execinfo.h" : YES
00:01:57.543 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:57.543 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:57.543 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:57.543 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:57.543 Run-time dependency openssl found: YES 3.1.1
00:01:57.543 Run-time dependency libpcap found: YES 1.10.4
00:01:57.543 Has header "pcap.h" with dependency libpcap: YES
00:01:57.543 Compiler for C supports arguments -Wcast-qual: YES
00:01:57.543 Compiler for C supports arguments -Wdeprecated: YES
00:01:57.543 Compiler for C supports arguments -Wformat: YES
00:01:57.543 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:57.543 Compiler for C supports arguments -Wformat-security: NO
00:01:57.543 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:57.543 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:57.543 Compiler for C supports arguments -Wnested-externs: YES
00:01:57.543 Compiler for C supports arguments -Wold-style-definition: YES
00:01:57.543 Compiler for C supports arguments -Wpointer-arith: YES
00:01:57.543 Compiler for C supports arguments -Wsign-compare: YES
00:01:57.543 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:57.543 Compiler for C supports arguments -Wundef: YES
00:01:57.543 Compiler for C supports arguments -Wwrite-strings: YES
00:01:57.543 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:57.543 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:57.543 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:57.543 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:57.543 Program objdump found: YES (/usr/bin/objdump)
00:01:57.543 Compiler for C supports arguments -mavx512f: YES
00:01:57.543 Checking if "AVX512 checking" compiles: YES
00:01:57.543 Fetching value of define "__SSE4_2__" : 1
00:01:57.543 Fetching value of define "__AES__" : 1
00:01:57.543 Fetching value of define "__AVX__" : 1
00:01:57.543 Fetching value of define "__AVX2__" : 1
00:01:57.543 Fetching value of define "__AVX512BW__" : 1
00:01:57.543 Fetching value of define "__AVX512CD__" : 1
00:01:57.543 Fetching value of define "__AVX512DQ__" : 1
00:01:57.543 Fetching value of define "__AVX512F__" : 1
00:01:57.543 Fetching value of define "__AVX512VL__" : 1
00:01:57.543 Fetching value of define "__PCLMUL__" : 1
00:01:57.543 Fetching value of define "__RDRND__" : 1
00:01:57.543 Fetching value of define "__RDSEED__" : 1
00:01:57.543 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:57.543 Fetching value of define "__znver1__" : (undefined)
00:01:57.543 Fetching value of define "__znver2__" : (undefined)
00:01:57.543 Fetching value of define "__znver3__" : (undefined)
00:01:57.543 Fetching value of define "__znver4__" : (undefined)
00:01:57.543 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:57.543 Message: lib/log: Defining dependency "log"
00:01:57.543 Message: lib/kvargs: Defining dependency "kvargs"
00:01:57.543 Message: lib/telemetry: Defining dependency "telemetry"
00:01:57.543 Checking for function "getentropy" : NO
00:01:57.543 Message: lib/eal: Defining dependency "eal"
00:01:57.543 Message: lib/ring: Defining dependency "ring"
00:01:57.543 Message: lib/rcu: Defining dependency "rcu"
00:01:57.543 Message: lib/mempool: Defining dependency "mempool"
00:01:57.543 Message: lib/mbuf: Defining dependency "mbuf"
00:01:57.543 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:57.543 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:57.543 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:57.543 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:57.543 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:57.543 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:57.543 Compiler for C supports arguments -mpclmul: YES
00:01:57.543 Compiler for C supports arguments -maes: YES
00:01:57.543 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:57.543 Compiler for C supports arguments -mavx512bw: YES
00:01:57.543 Compiler for C supports arguments -mavx512dq: YES
00:01:57.543 Compiler for C supports arguments -mavx512vl: YES
00:01:57.543 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:57.543 Compiler for C supports arguments -mavx2: YES
00:01:57.543 Compiler for C supports arguments -mavx: YES
00:01:57.543 Message: lib/net: Defining dependency "net"
00:01:57.543 Message: lib/meter: Defining dependency "meter"
00:01:57.543 Message: lib/ethdev: Defining dependency "ethdev"
00:01:57.543 Message: lib/pci: Defining dependency "pci"
00:01:57.543 Message: lib/cmdline: Defining dependency "cmdline"
00:01:57.543 Message: lib/hash: Defining dependency "hash"
00:01:57.543 Message: lib/timer: Defining dependency "timer"
00:01:57.543 Message: lib/compressdev: Defining dependency "compressdev"
00:01:57.543 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:57.543 Message: lib/dmadev: Defining dependency "dmadev"
00:01:57.543 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:57.543 Message: lib/power: Defining dependency "power"
00:01:57.543 Message: lib/reorder: Defining dependency "reorder"
00:01:57.543 Message: lib/security: Defining dependency "security"
00:01:57.543 Has header "linux/userfaultfd.h" : YES
00:01:57.543 Has header "linux/vduse.h" : YES
00:01:57.543 Message: lib/vhost: Defining dependency "vhost"
00:01:57.543 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:57.543 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:57.543 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:57.543 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:57.543 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:57.543 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:57.543 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:57.543 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:57.543 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:57.543 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:57.543 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:57.543 Configuring doxy-api-html.conf using configuration
00:01:57.543 Configuring doxy-api-man.conf using configuration
00:01:57.543 Program mandb found: YES (/usr/bin/mandb)
00:01:57.543 Program sphinx-build found: NO
00:01:57.543 Configuring rte_build_config.h using configuration
00:01:57.543 Message:
00:01:57.543 =================
00:01:57.543 Applications Enabled
00:01:57.543 =================
00:01:57.543
00:01:57.543 apps:
00:01:57.543
00:01:57.543
00:01:57.543 Message:
00:01:57.543 =================
00:01:57.543 Libraries Enabled
00:01:57.543 =================
00:01:57.543
00:01:57.543 libs:
00:01:57.543 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:57.543 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:57.543 cryptodev, dmadev, power, reorder, security, vhost,
00:01:57.543
00:01:57.543 Message:
00:01:57.543 ===============
00:01:57.543 Drivers Enabled
00:01:57.543 ===============
00:01:57.543
00:01:57.543 common:
00:01:57.543
00:01:57.543 bus:
00:01:57.543 pci, vdev,
00:01:57.543 mempool:
00:01:57.543 ring,
00:01:57.543 dma:
00:01:57.543
00:01:57.543 net:
00:01:57.543
00:01:57.543 crypto:
00:01:57.543
00:01:57.543 compress:
00:01:57.543
00:01:57.543 vdpa:
00:01:57.543
00:01:57.543
00:01:57.543 Message:
00:01:57.543 =================
00:01:57.543 Content Skipped
00:01:57.543 =================
00:01:57.543
00:01:57.543 apps:
00:01:57.543 dumpcap: explicitly disabled via build config
00:01:57.543 graph: explicitly disabled via build config
00:01:57.543 pdump: explicitly disabled via build config
00:01:57.543 proc-info: explicitly disabled via build config
00:01:57.543 test-acl: explicitly disabled via build config
00:01:57.543 test-bbdev: explicitly disabled via build config
00:01:57.543 test-cmdline: explicitly disabled via build config
00:01:57.543 test-compress-perf: explicitly disabled via build config
00:01:57.543 test-crypto-perf: explicitly disabled via build config
00:01:57.543 test-dma-perf: explicitly disabled via build config
00:01:57.543 test-eventdev: explicitly disabled via build config
00:01:57.543 test-fib: explicitly disabled via build config
00:01:57.543 test-flow-perf: explicitly disabled via build config
00:01:57.543 test-gpudev: explicitly disabled via build config
00:01:57.543 test-mldev: explicitly disabled via build config
00:01:57.543 test-pipeline: explicitly disabled via build config
00:01:57.543 test-pmd: explicitly disabled via build config
00:01:57.543 test-regex: explicitly disabled via build config
00:01:57.543 test-sad: explicitly disabled via build config
00:01:57.543 test-security-perf: explicitly disabled via build config
00:01:57.543
00:01:57.543 libs:
00:01:57.543 metrics: explicitly disabled via build config
00:01:57.543 acl: explicitly disabled via build config
00:01:57.543 bbdev: explicitly disabled via build config
00:01:57.543 bitratestats: explicitly disabled via build config
00:01:57.543 bpf: explicitly disabled via build config
00:01:57.543 cfgfile: explicitly disabled via build config
00:01:57.543 distributor: explicitly disabled via build config
00:01:57.544 efd: explicitly disabled via build config
00:01:57.544 eventdev: explicitly disabled via build config
00:01:57.544 dispatcher: explicitly disabled via build config
00:01:57.544 gpudev: explicitly disabled via build config
00:01:57.544 gro: explicitly disabled via build config
00:01:57.544 gso: explicitly disabled via build config
00:01:57.544 ip_frag: explicitly disabled via build config
00:01:57.544 jobstats: explicitly disabled via build config
00:01:57.544 latencystats: explicitly disabled via build config
00:01:57.544 lpm: explicitly disabled via build config
00:01:57.544 member: explicitly disabled via build config
00:01:57.544 pcapng: explicitly disabled via build config
00:01:57.544 rawdev: explicitly disabled via build config
00:01:57.544 regexdev: explicitly disabled via build config
00:01:57.544 mldev: explicitly disabled via build config
00:01:57.544 rib: explicitly disabled via build config
00:01:57.544 sched: explicitly disabled via build config
00:01:57.544 stack: explicitly disabled via build config
00:01:57.544 ipsec: explicitly disabled via build config
00:01:57.544 pdcp: explicitly disabled via build config
00:01:57.544 fib: explicitly disabled via build config
00:01:57.544 port: explicitly disabled via build config
00:01:57.544 pdump: explicitly disabled via build config
00:01:57.544 table: explicitly disabled via build config
00:01:57.544 pipeline: explicitly disabled via build config
00:01:57.544 graph: explicitly disabled via build config
00:01:57.544 node: explicitly disabled via build config
00:01:57.544
00:01:57.544 drivers:
00:01:57.544 common/cpt: not in enabled drivers build config
00:01:57.544 common/dpaax: not in enabled drivers build config
00:01:57.544 common/iavf: not in enabled drivers build config
00:01:57.544 common/idpf: not in enabled drivers build config
00:01:57.544 common/mvep: not in enabled drivers build config
00:01:57.544 common/octeontx: not in enabled drivers build config
00:01:57.544 bus/auxiliary: not in enabled drivers build config
00:01:57.544 bus/cdx: not in enabled drivers build config
00:01:57.544 bus/dpaa: not in enabled drivers build config
00:01:57.544 bus/fslmc: not in enabled drivers build config
00:01:57.544 bus/ifpga: not in enabled drivers build config
00:01:57.544 bus/platform: not in enabled drivers build config
00:01:57.544 bus/vmbus: not in enabled drivers build config
00:01:57.544 common/cnxk: not in enabled drivers build config
00:01:57.544 common/mlx5: not in enabled drivers build config
00:01:57.544 common/nfp: not in enabled drivers build config
00:01:57.544 common/qat: not in enabled drivers build config
00:01:57.544 common/sfc_efx: not in enabled drivers build config
00:01:57.544 mempool/bucket: not in enabled drivers build config
00:01:57.544 mempool/cnxk: not in enabled drivers build config
00:01:57.544 mempool/dpaa: not in enabled drivers build config
00:01:57.544 mempool/dpaa2: not in enabled drivers build config
00:01:57.544 mempool/octeontx: not in enabled drivers build config
00:01:57.544 mempool/stack: not in enabled drivers build config
00:01:57.544 dma/cnxk: not in enabled drivers build config
00:01:57.544 dma/dpaa: not in enabled drivers build config
00:01:57.544 dma/dpaa2: not in enabled drivers build config
00:01:57.544 dma/hisilicon: not in enabled drivers build config
00:01:57.544 dma/idxd: not in enabled drivers build config
00:01:57.544 dma/ioat: not in enabled drivers build config
00:01:57.544 dma/skeleton: not in enabled drivers build config
00:01:57.544 net/af_packet: not in enabled drivers build config
00:01:57.544 net/af_xdp: not in enabled drivers build config
00:01:57.544 net/ark: not in enabled drivers build config
00:01:57.544 net/atlantic: not in enabled drivers build config
00:01:57.544 net/avp: not in enabled drivers build config
00:01:57.544 net/axgbe: not in enabled drivers build config
00:01:57.544 net/bnx2x: not in enabled drivers build config
00:01:57.544 net/bnxt: not in enabled drivers build config
00:01:57.544 net/bonding: not in enabled drivers build config
00:01:57.544 net/cnxk: not in enabled drivers build config
00:01:57.544 net/cpfl: not in enabled drivers build config
00:01:57.544 net/cxgbe: not in enabled drivers build config
00:01:57.544 net/dpaa: not in enabled drivers build config
00:01:57.544 net/dpaa2: not in enabled drivers build config
00:01:57.544 net/e1000: not in enabled drivers build config
00:01:57.544 net/ena: not in enabled drivers build config
00:01:57.544 net/enetc: not in enabled drivers build config
00:01:57.544 net/enetfec: not in enabled drivers build config
00:01:57.544 net/enic: not in enabled drivers build config
00:01:57.544 net/failsafe: not in enabled drivers build config
00:01:57.544 net/fm10k: not in enabled drivers build config
00:01:57.544 net/gve: not in enabled drivers build config
00:01:57.544 net/hinic: not in enabled drivers build config
00:01:57.544 net/hns3: not
in enabled drivers build config 00:01:57.544 net/i40e: not in enabled drivers build config 00:01:57.544 net/iavf: not in enabled drivers build config 00:01:57.544 net/ice: not in enabled drivers build config 00:01:57.544 net/idpf: not in enabled drivers build config 00:01:57.544 net/igc: not in enabled drivers build config 00:01:57.544 net/ionic: not in enabled drivers build config 00:01:57.544 net/ipn3ke: not in enabled drivers build config 00:01:57.544 net/ixgbe: not in enabled drivers build config 00:01:57.544 net/mana: not in enabled drivers build config 00:01:57.544 net/memif: not in enabled drivers build config 00:01:57.544 net/mlx4: not in enabled drivers build config 00:01:57.544 net/mlx5: not in enabled drivers build config 00:01:57.544 net/mvneta: not in enabled drivers build config 00:01:57.544 net/mvpp2: not in enabled drivers build config 00:01:57.544 net/netvsc: not in enabled drivers build config 00:01:57.544 net/nfb: not in enabled drivers build config 00:01:57.544 net/nfp: not in enabled drivers build config 00:01:57.544 net/ngbe: not in enabled drivers build config 00:01:57.544 net/null: not in enabled drivers build config 00:01:57.544 net/octeontx: not in enabled drivers build config 00:01:57.544 net/octeon_ep: not in enabled drivers build config 00:01:57.544 net/pcap: not in enabled drivers build config 00:01:57.544 net/pfe: not in enabled drivers build config 00:01:57.544 net/qede: not in enabled drivers build config 00:01:57.544 net/ring: not in enabled drivers build config 00:01:57.544 net/sfc: not in enabled drivers build config 00:01:57.544 net/softnic: not in enabled drivers build config 00:01:57.544 net/tap: not in enabled drivers build config 00:01:57.544 net/thunderx: not in enabled drivers build config 00:01:57.544 net/txgbe: not in enabled drivers build config 00:01:57.544 net/vdev_netvsc: not in enabled drivers build config 00:01:57.544 net/vhost: not in enabled drivers build config 00:01:57.544 net/virtio: not in enabled drivers 
build config 00:01:57.544 net/vmxnet3: not in enabled drivers build config 00:01:57.544 raw/*: missing internal dependency, "rawdev" 00:01:57.544 crypto/armv8: not in enabled drivers build config 00:01:57.544 crypto/bcmfs: not in enabled drivers build config 00:01:57.544 crypto/caam_jr: not in enabled drivers build config 00:01:57.544 crypto/ccp: not in enabled drivers build config 00:01:57.544 crypto/cnxk: not in enabled drivers build config 00:01:57.544 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.544 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.544 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.544 crypto/mlx5: not in enabled drivers build config 00:01:57.544 crypto/mvsam: not in enabled drivers build config 00:01:57.544 crypto/nitrox: not in enabled drivers build config 00:01:57.544 crypto/null: not in enabled drivers build config 00:01:57.544 crypto/octeontx: not in enabled drivers build config 00:01:57.544 crypto/openssl: not in enabled drivers build config 00:01:57.544 crypto/scheduler: not in enabled drivers build config 00:01:57.544 crypto/uadk: not in enabled drivers build config 00:01:57.544 crypto/virtio: not in enabled drivers build config 00:01:57.544 compress/isal: not in enabled drivers build config 00:01:57.544 compress/mlx5: not in enabled drivers build config 00:01:57.544 compress/octeontx: not in enabled drivers build config 00:01:57.544 compress/zlib: not in enabled drivers build config 00:01:57.544 regex/*: missing internal dependency, "regexdev" 00:01:57.544 ml/*: missing internal dependency, "mldev" 00:01:57.544 vdpa/ifc: not in enabled drivers build config 00:01:57.544 vdpa/mlx5: not in enabled drivers build config 00:01:57.544 vdpa/nfp: not in enabled drivers build config 00:01:57.544 vdpa/sfc: not in enabled drivers build config 00:01:57.544 event/*: missing internal dependency, "eventdev" 00:01:57.544 baseband/*: missing internal dependency, "bbdev" 00:01:57.544 gpu/*: missing internal 
dependency, "gpudev" 00:01:57.544 00:01:57.544 00:01:57.544 Build targets in project: 85 00:01:57.544 00:01:57.544 DPDK 23.11.0 00:01:57.544 00:01:57.544 User defined options 00:01:57.544 buildtype : debug 00:01:57.544 default_library : shared 00:01:57.544 libdir : lib 00:01:57.544 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.544 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:57.544 c_link_args : 00:01:57.544 cpu_instruction_set: native 00:01:57.544 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:57.544 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:57.544 enable_docs : false 00:01:57.544 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.544 enable_kmods : false 00:01:57.544 tests : false 00:01:57.544 00:01:57.544 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.810 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.810 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.810 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.810 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.810 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.810 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.810 [6/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.810 [7/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.810 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.810 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.810 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.810 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.810 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.810 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.071 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.071 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.071 [16/265] Linking static target lib/librte_kvargs.a 00:01:58.071 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.071 [18/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.071 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.071 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.071 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.071 [22/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.071 [23/265] Linking static target lib/librte_log.a 00:01:58.071 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.071 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.071 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.071 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.071 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.071 [29/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.071 [30/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.071 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.071 [32/265] Linking static target lib/librte_pci.a 00:01:58.071 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.071 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.071 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.071 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.334 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.334 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.334 [39/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.334 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.334 [41/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.334 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.334 [43/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.334 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.334 [45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.334 [46/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.334 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.334 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.334 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.334 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.334 [51/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.334 [52/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.334 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.334 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.334 [55/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.334 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.334 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.334 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.334 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.334 [60/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.334 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.334 [62/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.334 [63/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.595 [64/265] Linking static target lib/librte_ring.a 00:01:58.595 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.595 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.595 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.595 [68/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.595 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.595 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.595 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.595 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:58.595 [73/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.595 [74/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.595 
[75/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.595 [76/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.595 [77/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.595 [78/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.595 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.595 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.595 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.595 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.595 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.595 [84/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.595 [85/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.595 [86/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.595 [87/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.595 [88/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.595 [89/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.595 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.595 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:58.595 [92/265] Linking static target lib/librte_meter.a 00:01:58.595 [93/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.595 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.595 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.595 [96/265] Linking static target lib/librte_cmdline.a 00:01:58.596 [97/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.596 [98/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:58.596 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.596 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.596 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.596 [102/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.596 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.596 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.596 [105/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.596 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.596 [107/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.596 [108/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.596 [109/265] Linking static target lib/librte_telemetry.a 00:01:58.596 [110/265] Linking static target lib/librte_rcu.a 00:01:58.596 [111/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.596 [112/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:58.596 [113/265] Linking static target lib/librte_net.a 00:01:58.596 [114/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.596 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.596 [116/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:58.596 [117/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.596 [118/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.596 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:58.596 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.596 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.596 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:58.596 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.596 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.856 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.856 [126/265] Linking static target lib/librte_eal.a 00:01:58.856 [127/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.856 [128/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.856 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.856 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.856 [131/265] Linking static target lib/librte_timer.a 00:01:58.856 [132/265] Linking static target lib/librte_mempool.a 00:01:58.856 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.856 [134/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.856 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.856 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.856 [137/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:58.856 [138/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.856 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.856 [140/265] Linking static target lib/librte_mbuf.a 00:01:58.856 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.856 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.856 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.856 [144/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.856 [145/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.856 [146/265] Linking target lib/librte_log.so.24.0 00:01:58.856 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.856 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.856 [149/265] Linking static target lib/librte_compressdev.a 00:01:58.856 [150/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.856 [151/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.856 [152/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.856 [153/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.856 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:58.856 [155/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.856 [156/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.856 [157/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.856 [158/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.856 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.856 [160/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.856 [161/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.856 [162/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.856 [163/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.856 [164/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.856 [165/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:01:58.856 [166/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.856 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.856 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.115 [169/265] Linking target lib/librte_kvargs.so.24.0 00:01:59.115 [170/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.115 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.115 [172/265] Linking static target lib/librte_power.a 00:01:59.115 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.115 [174/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.115 [175/265] Linking static target lib/librte_dmadev.a 00:01:59.115 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.115 [177/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.115 [178/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.115 [179/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.115 [180/265] Linking static target lib/librte_reorder.a 00:01:59.115 [181/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.115 [182/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.115 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.115 [184/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.115 [185/265] Linking target lib/librte_telemetry.so.24.0 00:01:59.115 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.115 [187/265] Linking static target lib/librte_hash.a 00:01:59.115 [188/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.115 [189/265] Compiling C 
object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.115 [190/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.115 [191/265] Linking static target drivers/librte_bus_vdev.a 00:01:59.115 [192/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.115 [193/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.115 [194/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.115 [195/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.115 [196/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.115 [197/265] Linking static target lib/librte_security.a 00:01:59.115 [198/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.115 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.373 [200/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:59.374 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.374 [202/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.374 [203/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.374 [204/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.374 [205/265] Linking static target drivers/librte_mempool_ring.a 00:01:59.374 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.374 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.374 [208/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.374 [209/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.374 [210/265] Linking 
static target drivers/librte_bus_pci.a 00:01:59.374 [211/265] Linking static target lib/librte_cryptodev.a 00:01:59.374 [212/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.374 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.632 [214/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.632 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.632 [216/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.632 [217/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.632 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.632 [219/265] Linking static target lib/librte_ethdev.a 00:01:59.632 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.890 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.890 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.890 [223/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.148 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.085 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.085 [226/265] Linking static target lib/librte_vhost.a 00:02:01.085 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.986 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.172 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.549 [230/265] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.549 [231/265] Linking target lib/librte_eal.so.24.0 00:02:08.549 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:08.549 [233/265] Linking target lib/librte_timer.so.24.0 00:02:08.807 [234/265] Linking target lib/librte_pci.so.24.0 00:02:08.807 [235/265] Linking target lib/librte_ring.so.24.0 00:02:08.807 [236/265] Linking target lib/librte_meter.so.24.0 00:02:08.807 [237/265] Linking target lib/librte_dmadev.so.24.0 00:02:08.807 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:08.807 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.807 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:08.807 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.807 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.807 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:08.807 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:08.807 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:08.807 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:09.065 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:09.065 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.065 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.065 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:09.065 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.324 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:09.324 [253/265] Linking target lib/librte_net.so.24.0 00:02:09.324 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:09.324 [255/265] Linking 
target lib/librte_cryptodev.so.24.0 00:02:09.324 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.324 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.324 [258/265] Linking target lib/librte_security.so.24.0 00:02:09.324 [259/265] Linking target lib/librte_hash.so.24.0 00:02:09.324 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:09.324 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:09.583 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:09.583 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:09.583 [264/265] Linking target lib/librte_power.so.24.0 00:02:09.583 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:09.583 INFO: autodetecting backend as ninja 00:02:09.583 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:10.518 CC lib/ut_mock/mock.o 00:02:10.518 CC lib/ut/ut.o 00:02:10.518 CC lib/log/log.o 00:02:10.518 CC lib/log/log_deprecated.o 00:02:10.518 CC lib/log/log_flags.o 00:02:10.518 LIB libspdk_ut_mock.a 00:02:10.518 LIB libspdk_ut.a 00:02:10.518 LIB libspdk_log.a 00:02:10.775 SO libspdk_ut_mock.so.5.0 00:02:10.775 SO libspdk_ut.so.1.0 00:02:10.775 SO libspdk_log.so.6.1 00:02:10.775 SYMLINK libspdk_ut_mock.so 00:02:10.775 SYMLINK libspdk_ut.so 00:02:10.775 SYMLINK libspdk_log.so 00:02:11.033 CXX lib/trace_parser/trace.o 00:02:11.033 CC lib/util/base64.o 00:02:11.033 CC lib/util/cpuset.o 00:02:11.033 CC lib/util/bit_array.o 00:02:11.033 CC lib/dma/dma.o 00:02:11.033 CC lib/util/crc32.o 00:02:11.033 CC lib/util/crc16.o 00:02:11.033 CC lib/util/crc32c.o 00:02:11.033 CC lib/util/crc32_ieee.o 00:02:11.033 CC lib/util/crc64.o 00:02:11.033 CC lib/util/dif.o 00:02:11.033 CC lib/util/fd.o 00:02:11.033 CC lib/util/file.o 00:02:11.033 CC lib/util/hexlify.o 
00:02:11.033 CC lib/ioat/ioat.o 00:02:11.033 CC lib/util/iov.o 00:02:11.033 CC lib/util/math.o 00:02:11.033 CC lib/util/pipe.o 00:02:11.033 CC lib/util/strerror_tls.o 00:02:11.033 CC lib/util/string.o 00:02:11.033 CC lib/util/uuid.o 00:02:11.033 CC lib/util/fd_group.o 00:02:11.033 CC lib/util/xor.o 00:02:11.033 CC lib/util/zipf.o 00:02:11.033 CC lib/vfio_user/host/vfio_user.o 00:02:11.033 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.033 LIB libspdk_dma.a 00:02:11.033 SO libspdk_dma.so.3.0 00:02:11.033 SYMLINK libspdk_dma.so 00:02:11.292 LIB libspdk_ioat.a 00:02:11.292 SO libspdk_ioat.so.6.0 00:02:11.292 LIB libspdk_vfio_user.a 00:02:11.292 SYMLINK libspdk_ioat.so 00:02:11.292 SO libspdk_vfio_user.so.4.0 00:02:11.292 LIB libspdk_util.a 00:02:11.292 SYMLINK libspdk_vfio_user.so 00:02:11.292 SO libspdk_util.so.8.0 00:02:11.549 SYMLINK libspdk_util.so 00:02:11.549 LIB libspdk_trace_parser.a 00:02:11.549 SO libspdk_trace_parser.so.4.0 00:02:11.807 CC lib/vmd/vmd.o 00:02:11.807 CC lib/vmd/led.o 00:02:11.807 CC lib/idxd/idxd.o 00:02:11.807 CC lib/idxd/idxd_user.o 00:02:11.807 CC lib/idxd/idxd_kernel.o 00:02:11.807 CC lib/rdma/common.o 00:02:11.807 CC lib/conf/conf.o 00:02:11.808 CC lib/rdma/rdma_verbs.o 00:02:11.808 CC lib/env_dpdk/env.o 00:02:11.808 CC lib/json/json_parse.o 00:02:11.808 CC lib/json/json_util.o 00:02:11.808 CC lib/json/json_write.o 00:02:11.808 CC lib/env_dpdk/memory.o 00:02:11.808 CC lib/env_dpdk/pci.o 00:02:11.808 CC lib/env_dpdk/init.o 00:02:11.808 CC lib/env_dpdk/threads.o 00:02:11.808 CC lib/env_dpdk/pci_ioat.o 00:02:11.808 CC lib/env_dpdk/pci_vmd.o 00:02:11.808 CC lib/env_dpdk/pci_virtio.o 00:02:11.808 SYMLINK libspdk_trace_parser.so 00:02:11.808 CC lib/env_dpdk/pci_idxd.o 00:02:11.808 CC lib/env_dpdk/sigbus_handler.o 00:02:11.808 CC lib/env_dpdk/pci_event.o 00:02:11.808 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.808 CC lib/env_dpdk/pci_dpdk.o 00:02:11.808 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.808 LIB libspdk_conf.a 00:02:11.808 SO 
libspdk_conf.so.5.0 00:02:12.066 LIB libspdk_json.a 00:02:12.066 LIB libspdk_rdma.a 00:02:12.066 SYMLINK libspdk_conf.so 00:02:12.066 SO libspdk_json.so.5.1 00:02:12.066 SO libspdk_rdma.so.5.0 00:02:12.066 SYMLINK libspdk_json.so 00:02:12.066 SYMLINK libspdk_rdma.so 00:02:12.066 LIB libspdk_idxd.a 00:02:12.066 SO libspdk_idxd.so.11.0 00:02:12.066 LIB libspdk_vmd.a 00:02:12.324 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.324 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.324 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.324 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.324 SYMLINK libspdk_idxd.so 00:02:12.324 SO libspdk_vmd.so.5.0 00:02:12.324 SYMLINK libspdk_vmd.so 00:02:12.324 LIB libspdk_jsonrpc.a 00:02:12.582 SO libspdk_jsonrpc.so.5.1 00:02:12.582 SYMLINK libspdk_jsonrpc.so 00:02:12.582 LIB libspdk_env_dpdk.a 00:02:12.841 CC lib/rpc/rpc.o 00:02:12.841 SO libspdk_env_dpdk.so.13.0 00:02:12.841 SYMLINK libspdk_env_dpdk.so 00:02:12.841 LIB libspdk_rpc.a 00:02:12.841 SO libspdk_rpc.so.5.0 00:02:13.131 SYMLINK libspdk_rpc.so 00:02:13.131 CC lib/sock/sock.o 00:02:13.131 CC lib/sock/sock_rpc.o 00:02:13.131 CC lib/notify/notify.o 00:02:13.131 CC lib/notify/notify_rpc.o 00:02:13.131 CC lib/trace/trace.o 00:02:13.131 CC lib/trace/trace_flags.o 00:02:13.131 CC lib/trace/trace_rpc.o 00:02:13.389 LIB libspdk_notify.a 00:02:13.389 SO libspdk_notify.so.5.0 00:02:13.389 LIB libspdk_trace.a 00:02:13.389 SO libspdk_trace.so.9.0 00:02:13.389 SYMLINK libspdk_notify.so 00:02:13.389 LIB libspdk_sock.a 00:02:13.389 SYMLINK libspdk_trace.so 00:02:13.389 SO libspdk_sock.so.8.0 00:02:13.646 SYMLINK libspdk_sock.so 00:02:13.646 CC lib/thread/thread.o 00:02:13.646 CC lib/thread/iobuf.o 00:02:13.646 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.646 CC lib/nvme/nvme_ctrlr.o 00:02:13.646 CC lib/nvme/nvme_fabric.o 00:02:13.646 CC lib/nvme/nvme_ns_cmd.o 00:02:13.646 CC lib/nvme/nvme_ns.o 00:02:13.646 CC lib/nvme/nvme_pcie_common.o 00:02:13.646 CC lib/nvme/nvme_pcie.o 00:02:13.646 CC lib/nvme/nvme_qpair.o 
00:02:13.646 CC lib/nvme/nvme.o 00:02:13.646 CC lib/nvme/nvme_quirks.o 00:02:13.646 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.646 CC lib/nvme/nvme_transport.o 00:02:13.646 CC lib/nvme/nvme_discovery.o 00:02:13.646 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.646 CC lib/nvme/nvme_tcp.o 00:02:13.646 CC lib/nvme/nvme_opal.o 00:02:13.646 CC lib/nvme/nvme_io_msg.o 00:02:13.646 CC lib/nvme/nvme_poll_group.o 00:02:13.646 CC lib/nvme/nvme_zns.o 00:02:13.646 CC lib/nvme/nvme_cuse.o 00:02:13.646 CC lib/nvme/nvme_vfio_user.o 00:02:13.646 CC lib/nvme/nvme_rdma.o 00:02:15.015 LIB libspdk_thread.a 00:02:15.015 SO libspdk_thread.so.9.0 00:02:15.015 SYMLINK libspdk_thread.so 00:02:15.015 CC lib/virtio/virtio.o 00:02:15.015 CC lib/virtio/virtio_vhost_user.o 00:02:15.015 CC lib/virtio/virtio_vfio_user.o 00:02:15.015 CC lib/virtio/virtio_pci.o 00:02:15.015 CC lib/accel/accel_rpc.o 00:02:15.015 CC lib/accel/accel.o 00:02:15.015 CC lib/accel/accel_sw.o 00:02:15.015 CC lib/blob/zeroes.o 00:02:15.015 CC lib/blob/blobstore.o 00:02:15.015 CC lib/init/json_config.o 00:02:15.015 CC lib/blob/request.o 00:02:15.015 CC lib/init/subsystem.o 00:02:15.015 CC lib/init/subsystem_rpc.o 00:02:15.015 CC lib/init/rpc.o 00:02:15.015 CC lib/blob/blob_bs_dev.o 00:02:15.272 LIB libspdk_init.a 00:02:15.272 LIB libspdk_virtio.a 00:02:15.272 LIB libspdk_nvme.a 00:02:15.272 SO libspdk_init.so.4.0 00:02:15.272 SO libspdk_virtio.so.6.0 00:02:15.272 SYMLINK libspdk_init.so 00:02:15.272 SO libspdk_nvme.so.12.0 00:02:15.272 SYMLINK libspdk_virtio.so 00:02:15.529 SYMLINK libspdk_nvme.so 00:02:15.529 CC lib/event/app.o 00:02:15.529 CC lib/event/reactor.o 00:02:15.529 CC lib/event/log_rpc.o 00:02:15.529 CC lib/event/scheduler_static.o 00:02:15.529 CC lib/event/app_rpc.o 00:02:15.787 LIB libspdk_accel.a 00:02:15.787 SO libspdk_accel.so.14.0 00:02:15.787 SYMLINK libspdk_accel.so 00:02:15.787 LIB libspdk_event.a 00:02:16.045 SO libspdk_event.so.12.0 00:02:16.045 SYMLINK libspdk_event.so 00:02:16.045 CC lib/bdev/bdev.o 
00:02:16.045 CC lib/bdev/bdev_rpc.o 00:02:16.045 CC lib/bdev/part.o 00:02:16.045 CC lib/bdev/bdev_zone.o 00:02:16.045 CC lib/bdev/scsi_nvme.o 00:02:16.979 LIB libspdk_blob.a 00:02:16.979 SO libspdk_blob.so.10.1 00:02:16.979 SYMLINK libspdk_blob.so 00:02:17.238 CC lib/blobfs/blobfs.o 00:02:17.238 CC lib/blobfs/tree.o 00:02:17.238 CC lib/lvol/lvol.o 00:02:17.910 LIB libspdk_bdev.a 00:02:17.910 SO libspdk_bdev.so.14.0 00:02:17.910 LIB libspdk_blobfs.a 00:02:17.910 SO libspdk_blobfs.so.9.0 00:02:17.910 SYMLINK libspdk_bdev.so 00:02:17.910 LIB libspdk_lvol.a 00:02:17.910 SO libspdk_lvol.so.9.1 00:02:17.911 SYMLINK libspdk_blobfs.so 00:02:17.911 SYMLINK libspdk_lvol.so 00:02:18.184 CC lib/scsi/dev.o 00:02:18.184 CC lib/scsi/lun.o 00:02:18.184 CC lib/scsi/port.o 00:02:18.184 CC lib/scsi/scsi.o 00:02:18.184 CC lib/scsi/scsi_pr.o 00:02:18.184 CC lib/scsi/scsi_bdev.o 00:02:18.184 CC lib/scsi/scsi_rpc.o 00:02:18.184 CC lib/scsi/task.o 00:02:18.184 CC lib/ftl/ftl_core.o 00:02:18.184 CC lib/ftl/ftl_layout.o 00:02:18.184 CC lib/ftl/ftl_init.o 00:02:18.184 CC lib/ftl/ftl_io.o 00:02:18.184 CC lib/ftl/ftl_debug.o 00:02:18.184 CC lib/ftl/ftl_sb.o 00:02:18.184 CC lib/ftl/ftl_l2p.o 00:02:18.184 CC lib/ftl/ftl_l2p_flat.o 00:02:18.184 CC lib/ftl/ftl_nv_cache.o 00:02:18.184 CC lib/ftl/ftl_band.o 00:02:18.184 CC lib/ftl/ftl_band_ops.o 00:02:18.184 CC lib/ftl/ftl_writer.o 00:02:18.184 CC lib/ftl/ftl_reloc.o 00:02:18.184 CC lib/ftl/ftl_rq.o 00:02:18.184 CC lib/ftl/ftl_p2l.o 00:02:18.184 CC lib/ftl/ftl_l2p_cache.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.184 CC lib/nvmf/ctrlr.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.184 CC lib/nvmf/ctrlr_discovery.o 00:02:18.184 CC lib/nvmf/ctrlr_bdev.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.184 
CC lib/nvmf/subsystem.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.184 CC lib/ublk/ublk.o 00:02:18.184 CC lib/nvmf/nvmf.o 00:02:18.184 CC lib/nvmf/transport.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.184 CC lib/ublk/ublk_rpc.o 00:02:18.184 CC lib/nvmf/nvmf_rpc.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.184 CC lib/nvmf/tcp.o 00:02:18.184 CC lib/nvmf/rdma.o 00:02:18.184 CC lib/nbd/nbd.o 00:02:18.184 CC lib/nbd/nbd_rpc.o 00:02:18.184 CC lib/ftl/utils/ftl_conf.o 00:02:18.184 CC lib/ftl/utils/ftl_md.o 00:02:18.184 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:18.184 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.184 CC lib/ftl/utils/ftl_mempool.o 00:02:18.184 CC lib/ftl/utils/ftl_property.o 00:02:18.184 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.184 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.184 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.184 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.184 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.184 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.184 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.184 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.184 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.184 CC lib/ftl/base/ftl_base_dev.o 00:02:18.184 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.184 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.184 CC lib/ftl/ftl_trace.o 00:02:18.443 LIB libspdk_nbd.a 00:02:18.443 SO libspdk_nbd.so.6.0 00:02:18.702 SYMLINK libspdk_nbd.so 00:02:18.702 LIB libspdk_scsi.a 00:02:18.702 SO libspdk_scsi.so.8.0 00:02:18.702 SYMLINK libspdk_scsi.so 00:02:18.702 LIB libspdk_ublk.a 00:02:18.703 SO libspdk_ublk.so.2.0 00:02:18.961 SYMLINK libspdk_ublk.so 00:02:18.961 CC lib/iscsi/conn.o 00:02:18.961 CC lib/iscsi/init_grp.o 00:02:18.961 CC lib/iscsi/iscsi.o 00:02:18.961 CC lib/iscsi/md5.o 00:02:18.961 CC lib/iscsi/param.o 00:02:18.961 CC lib/iscsi/tgt_node.o 00:02:18.961 CC lib/iscsi/iscsi_subsystem.o 00:02:18.961 CC lib/iscsi/portal_grp.o 00:02:18.961 
CC lib/iscsi/iscsi_rpc.o 00:02:18.961 CC lib/iscsi/task.o 00:02:18.961 CC lib/vhost/vhost.o 00:02:18.961 CC lib/vhost/vhost_rpc.o 00:02:18.961 CC lib/vhost/vhost_scsi.o 00:02:18.961 CC lib/vhost/vhost_blk.o 00:02:18.961 CC lib/vhost/rte_vhost_user.o 00:02:18.961 LIB libspdk_ftl.a 00:02:19.220 SO libspdk_ftl.so.8.0 00:02:19.480 SYMLINK libspdk_ftl.so 00:02:19.740 LIB libspdk_nvmf.a 00:02:19.740 LIB libspdk_vhost.a 00:02:19.740 SO libspdk_nvmf.so.17.0 00:02:19.740 SO libspdk_vhost.so.7.1 00:02:19.740 SYMLINK libspdk_vhost.so 00:02:19.999 SYMLINK libspdk_nvmf.so 00:02:19.999 LIB libspdk_iscsi.a 00:02:19.999 SO libspdk_iscsi.so.7.0 00:02:19.999 SYMLINK libspdk_iscsi.so 00:02:20.568 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.568 CC module/blob/bdev/blob_bdev.o 00:02:20.568 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.568 CC module/accel/iaa/accel_iaa.o 00:02:20.568 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.568 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.568 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.568 CC module/sock/posix/posix.o 00:02:20.568 CC module/accel/ioat/accel_ioat.o 00:02:20.568 CC module/accel/error/accel_error.o 00:02:20.568 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.568 CC module/accel/dsa/accel_dsa.o 00:02:20.568 CC module/accel/error/accel_error_rpc.o 00:02:20.568 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.568 LIB libspdk_env_dpdk_rpc.a 00:02:20.568 SO libspdk_env_dpdk_rpc.so.5.0 00:02:20.568 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.568 LIB libspdk_scheduler_gscheduler.a 00:02:20.568 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.568 LIB libspdk_accel_ioat.a 00:02:20.568 LIB libspdk_accel_error.a 00:02:20.568 SO libspdk_scheduler_gscheduler.so.3.0 00:02:20.568 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:20.828 LIB libspdk_scheduler_dynamic.a 00:02:20.828 SO libspdk_accel_ioat.so.5.0 00:02:20.828 LIB libspdk_accel_iaa.a 00:02:20.828 SO libspdk_accel_error.so.1.0 00:02:20.828 LIB libspdk_blob_bdev.a 
00:02:20.828 LIB libspdk_accel_dsa.a 00:02:20.828 SO libspdk_scheduler_dynamic.so.3.0 00:02:20.828 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.828 SO libspdk_accel_iaa.so.2.0 00:02:20.828 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.828 SO libspdk_blob_bdev.so.10.1 00:02:20.828 SO libspdk_accel_dsa.so.4.0 00:02:20.828 SYMLINK libspdk_accel_ioat.so 00:02:20.828 SYMLINK libspdk_accel_error.so 00:02:20.828 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.828 SYMLINK libspdk_accel_iaa.so 00:02:20.828 SYMLINK libspdk_blob_bdev.so 00:02:20.828 SYMLINK libspdk_accel_dsa.so 00:02:21.087 LIB libspdk_sock_posix.a 00:02:21.087 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.087 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.087 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.087 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.087 CC module/bdev/error/vbdev_error.o 00:02:21.087 CC module/bdev/raid/bdev_raid.o 00:02:21.087 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.087 CC module/bdev/split/vbdev_split.o 00:02:21.087 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.087 CC module/bdev/raid/raid0.o 00:02:21.087 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.087 CC module/bdev/malloc/bdev_malloc.o 00:02:21.087 CC module/bdev/raid/raid1.o 00:02:21.087 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.087 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.087 CC module/bdev/raid/concat.o 00:02:21.087 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.087 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.087 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.087 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.087 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.087 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.087 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.087 CC module/bdev/gpt/gpt.o 00:02:21.087 CC module/bdev/null/bdev_null.o 00:02:21.087 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.087 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.087 CC module/bdev/aio/bdev_aio.o 
00:02:21.087 CC module/bdev/null/bdev_null_rpc.o 00:02:21.087 SO libspdk_sock_posix.so.5.0 00:02:21.087 CC module/bdev/delay/vbdev_delay.o 00:02:21.087 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.087 CC module/bdev/nvme/bdev_nvme.o 00:02:21.087 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.087 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.087 CC module/bdev/nvme/nvme_rpc.o 00:02:21.087 CC module/bdev/nvme/vbdev_opal.o 00:02:21.087 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.087 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.087 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.087 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.087 CC module/bdev/ftl/bdev_ftl.o 00:02:21.087 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.087 SYMLINK libspdk_sock_posix.so 00:02:21.346 LIB libspdk_blobfs_bdev.a 00:02:21.346 LIB libspdk_bdev_split.a 00:02:21.346 SO libspdk_blobfs_bdev.so.5.0 00:02:21.346 LIB libspdk_bdev_gpt.a 00:02:21.346 LIB libspdk_bdev_error.a 00:02:21.346 SO libspdk_bdev_split.so.5.0 00:02:21.346 SO libspdk_bdev_gpt.so.5.0 00:02:21.346 LIB libspdk_bdev_null.a 00:02:21.346 SO libspdk_bdev_error.so.5.0 00:02:21.346 SYMLINK libspdk_blobfs_bdev.so 00:02:21.346 LIB libspdk_bdev_ftl.a 00:02:21.346 LIB libspdk_bdev_passthru.a 00:02:21.346 SYMLINK libspdk_bdev_split.so 00:02:21.346 SO libspdk_bdev_null.so.5.0 00:02:21.346 SO libspdk_bdev_ftl.so.5.0 00:02:21.346 SYMLINK libspdk_bdev_gpt.so 00:02:21.604 SO libspdk_bdev_passthru.so.5.0 00:02:21.604 LIB libspdk_bdev_malloc.a 00:02:21.604 LIB libspdk_bdev_aio.a 00:02:21.604 SYMLINK libspdk_bdev_error.so 00:02:21.604 LIB libspdk_bdev_zone_block.a 00:02:21.604 LIB libspdk_bdev_iscsi.a 00:02:21.604 SYMLINK libspdk_bdev_null.so 00:02:21.604 SO libspdk_bdev_aio.so.5.0 00:02:21.604 SO libspdk_bdev_malloc.so.5.0 00:02:21.604 LIB libspdk_bdev_delay.a 00:02:21.604 SYMLINK libspdk_bdev_ftl.so 00:02:21.604 SO libspdk_bdev_zone_block.so.5.0 00:02:21.604 SYMLINK libspdk_bdev_passthru.so 00:02:21.604 SO libspdk_bdev_iscsi.so.5.0 
00:02:21.604 SO libspdk_bdev_delay.so.5.0 00:02:21.604 LIB libspdk_bdev_lvol.a 00:02:21.604 SYMLINK libspdk_bdev_aio.so 00:02:21.604 SYMLINK libspdk_bdev_malloc.so 00:02:21.604 SO libspdk_bdev_lvol.so.5.0 00:02:21.604 SYMLINK libspdk_bdev_zone_block.so 00:02:21.604 SYMLINK libspdk_bdev_delay.so 00:02:21.604 SYMLINK libspdk_bdev_iscsi.so 00:02:21.604 LIB libspdk_bdev_virtio.a 00:02:21.604 SYMLINK libspdk_bdev_lvol.so 00:02:21.604 SO libspdk_bdev_virtio.so.5.0 00:02:21.604 SYMLINK libspdk_bdev_virtio.so 00:02:21.863 LIB libspdk_bdev_raid.a 00:02:21.863 SO libspdk_bdev_raid.so.5.0 00:02:21.863 SYMLINK libspdk_bdev_raid.so 00:02:22.800 LIB libspdk_bdev_nvme.a 00:02:22.800 SO libspdk_bdev_nvme.so.6.0 00:02:22.800 SYMLINK libspdk_bdev_nvme.so 00:02:23.369 CC module/event/subsystems/sock/sock.o 00:02:23.369 CC module/event/subsystems/vmd/vmd.o 00:02:23.369 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.369 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.369 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.369 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.369 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.369 LIB libspdk_event_sock.a 00:02:23.369 SO libspdk_event_sock.so.4.0 00:02:23.369 LIB libspdk_event_vhost_blk.a 00:02:23.369 LIB libspdk_event_vmd.a 00:02:23.369 LIB libspdk_event_scheduler.a 00:02:23.369 LIB libspdk_event_iobuf.a 00:02:23.369 SO libspdk_event_iobuf.so.2.0 00:02:23.369 SO libspdk_event_vmd.so.5.0 00:02:23.369 SO libspdk_event_vhost_blk.so.2.0 00:02:23.369 SO libspdk_event_scheduler.so.3.0 00:02:23.369 SYMLINK libspdk_event_sock.so 00:02:23.369 SYMLINK libspdk_event_vhost_blk.so 00:02:23.369 SYMLINK libspdk_event_iobuf.so 00:02:23.369 SYMLINK libspdk_event_vmd.so 00:02:23.369 SYMLINK libspdk_event_scheduler.so 00:02:23.628 CC module/event/subsystems/accel/accel.o 00:02:23.888 LIB libspdk_event_accel.a 00:02:23.888 SO libspdk_event_accel.so.5.0 00:02:23.888 SYMLINK libspdk_event_accel.so 00:02:24.148 CC 
module/event/subsystems/bdev/bdev.o 00:02:24.148 LIB libspdk_event_bdev.a 00:02:24.148 SO libspdk_event_bdev.so.5.0 00:02:24.148 SYMLINK libspdk_event_bdev.so 00:02:24.407 CC module/event/subsystems/nbd/nbd.o 00:02:24.407 CC module/event/subsystems/scsi/scsi.o 00:02:24.407 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.407 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.407 CC module/event/subsystems/ublk/ublk.o 00:02:24.665 LIB libspdk_event_nbd.a 00:02:24.665 LIB libspdk_event_ublk.a 00:02:24.665 LIB libspdk_event_scsi.a 00:02:24.665 SO libspdk_event_nbd.so.5.0 00:02:24.665 SO libspdk_event_ublk.so.2.0 00:02:24.665 SO libspdk_event_scsi.so.5.0 00:02:24.665 LIB libspdk_event_nvmf.a 00:02:24.665 SYMLINK libspdk_event_nbd.so 00:02:24.665 SO libspdk_event_nvmf.so.5.0 00:02:24.665 SYMLINK libspdk_event_ublk.so 00:02:24.665 SYMLINK libspdk_event_scsi.so 00:02:24.665 SYMLINK libspdk_event_nvmf.so 00:02:24.924 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.924 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.924 LIB libspdk_event_vhost_scsi.a 00:02:24.924 LIB libspdk_event_iscsi.a 00:02:25.183 SO libspdk_event_vhost_scsi.so.2.0 00:02:25.183 SO libspdk_event_iscsi.so.5.0 00:02:25.183 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.183 SYMLINK libspdk_event_iscsi.so 00:02:25.183 SO libspdk.so.5.0 00:02:25.183 SYMLINK libspdk.so 00:02:25.444 CXX app/trace/trace.o 00:02:25.444 TEST_HEADER include/spdk/accel.h 00:02:25.444 CC app/trace_record/trace_record.o 00:02:25.444 TEST_HEADER include/spdk/assert.h 00:02:25.444 TEST_HEADER include/spdk/accel_module.h 00:02:25.444 TEST_HEADER include/spdk/base64.h 00:02:25.444 TEST_HEADER include/spdk/barrier.h 00:02:25.444 TEST_HEADER include/spdk/bdev_module.h 00:02:25.444 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.444 TEST_HEADER include/spdk/bdev.h 00:02:25.444 CC test/rpc_client/rpc_client_test.o 00:02:25.444 TEST_HEADER include/spdk/bit_pool.h 00:02:25.444 TEST_HEADER include/spdk/bit_array.h 00:02:25.444 CC 
app/spdk_nvme_identify/identify.o 00:02:25.444 CC app/spdk_nvme_perf/perf.o 00:02:25.444 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.444 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.444 TEST_HEADER include/spdk/blobfs.h 00:02:25.444 CC app/spdk_lspci/spdk_lspci.o 00:02:25.444 TEST_HEADER include/spdk/blob.h 00:02:25.444 TEST_HEADER include/spdk/conf.h 00:02:25.444 CC app/spdk_top/spdk_top.o 00:02:25.444 TEST_HEADER include/spdk/config.h 00:02:25.444 TEST_HEADER include/spdk/cpuset.h 00:02:25.444 TEST_HEADER include/spdk/crc16.h 00:02:25.444 TEST_HEADER include/spdk/crc32.h 00:02:25.444 TEST_HEADER include/spdk/crc64.h 00:02:25.444 TEST_HEADER include/spdk/dif.h 00:02:25.444 TEST_HEADER include/spdk/dma.h 00:02:25.444 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.444 TEST_HEADER include/spdk/endian.h 00:02:25.444 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.444 TEST_HEADER include/spdk/env.h 00:02:25.444 TEST_HEADER include/spdk/event.h 00:02:25.444 TEST_HEADER include/spdk/fd_group.h 00:02:25.444 TEST_HEADER include/spdk/fd.h 00:02:25.444 TEST_HEADER include/spdk/ftl.h 00:02:25.444 TEST_HEADER include/spdk/file.h 00:02:25.444 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.444 TEST_HEADER include/spdk/hexlify.h 00:02:25.444 TEST_HEADER include/spdk/histogram_data.h 00:02:25.444 TEST_HEADER include/spdk/idxd.h 00:02:25.444 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.444 TEST_HEADER include/spdk/init.h 00:02:25.444 TEST_HEADER include/spdk/ioat.h 00:02:25.444 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.444 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.444 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.444 TEST_HEADER include/spdk/json.h 00:02:25.444 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.444 TEST_HEADER include/spdk/likely.h 00:02:25.444 TEST_HEADER include/spdk/lvol.h 00:02:25.444 TEST_HEADER include/spdk/log.h 00:02:25.444 TEST_HEADER include/spdk/memory.h 00:02:25.444 TEST_HEADER include/spdk/mmio.h 00:02:25.444 CC app/nvmf_tgt/nvmf_main.o 
00:02:25.444 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.444 TEST_HEADER include/spdk/nbd.h 00:02:25.444 TEST_HEADER include/spdk/notify.h 00:02:25.444 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.444 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.444 TEST_HEADER include/spdk/nvme.h 00:02:25.444 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.444 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.444 CC app/spdk_dd/spdk_dd.o 00:02:25.444 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.444 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.444 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.444 TEST_HEADER include/spdk/nvmf.h 00:02:25.444 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.444 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.444 TEST_HEADER include/spdk/opal.h 00:02:25.444 TEST_HEADER include/spdk/pci_ids.h 00:02:25.444 TEST_HEADER include/spdk/opal_spec.h 00:02:25.444 TEST_HEADER include/spdk/queue.h 00:02:25.444 TEST_HEADER include/spdk/pipe.h 00:02:25.444 TEST_HEADER include/spdk/reduce.h 00:02:25.444 CC app/vhost/vhost.o 00:02:25.444 TEST_HEADER include/spdk/rpc.h 00:02:25.444 TEST_HEADER include/spdk/scheduler.h 00:02:25.444 TEST_HEADER include/spdk/scsi.h 00:02:25.444 TEST_HEADER include/spdk/stdinc.h 00:02:25.444 TEST_HEADER include/spdk/sock.h 00:02:25.444 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.444 TEST_HEADER include/spdk/string.h 00:02:25.710 TEST_HEADER include/spdk/thread.h 00:02:25.710 TEST_HEADER include/spdk/trace_parser.h 00:02:25.710 TEST_HEADER include/spdk/tree.h 00:02:25.710 TEST_HEADER include/spdk/trace.h 00:02:25.710 TEST_HEADER include/spdk/util.h 00:02:25.710 TEST_HEADER include/spdk/ublk.h 00:02:25.710 TEST_HEADER include/spdk/uuid.h 00:02:25.710 TEST_HEADER include/spdk/version.h 00:02:25.710 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.710 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.710 TEST_HEADER include/spdk/vhost.h 00:02:25.710 TEST_HEADER include/spdk/vmd.h 00:02:25.710 CC app/spdk_tgt/spdk_tgt.o 00:02:25.710 
TEST_HEADER include/spdk/xor.h 00:02:25.710 TEST_HEADER include/spdk/zipf.h 00:02:25.710 CXX test/cpp_headers/accel.o 00:02:25.710 CXX test/cpp_headers/assert.o 00:02:25.710 CXX test/cpp_headers/barrier.o 00:02:25.710 CXX test/cpp_headers/accel_module.o 00:02:25.710 CXX test/cpp_headers/bdev.o 00:02:25.710 CXX test/cpp_headers/base64.o 00:02:25.710 CXX test/cpp_headers/bdev_module.o 00:02:25.710 CXX test/cpp_headers/bit_array.o 00:02:25.710 CXX test/cpp_headers/bit_pool.o 00:02:25.710 CXX test/cpp_headers/bdev_zone.o 00:02:25.710 CXX test/cpp_headers/blobfs.o 00:02:25.710 CXX test/cpp_headers/blob_bdev.o 00:02:25.710 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.710 CXX test/cpp_headers/blob.o 00:02:25.710 CXX test/cpp_headers/conf.o 00:02:25.710 CXX test/cpp_headers/config.o 00:02:25.710 CXX test/cpp_headers/cpuset.o 00:02:25.710 CXX test/cpp_headers/crc16.o 00:02:25.710 CXX test/cpp_headers/crc32.o 00:02:25.710 CXX test/cpp_headers/crc64.o 00:02:25.710 CXX test/cpp_headers/dif.o 00:02:25.710 CC test/event/event_perf/event_perf.o 00:02:25.710 CC test/event/reactor/reactor.o 00:02:25.710 CC test/nvme/aer/aer.o 00:02:25.710 CC test/thread/poller_perf/poller_perf.o 00:02:25.710 CC examples/nvme/hello_world/hello_world.o 00:02:25.710 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.710 CC test/nvme/sgl/sgl.o 00:02:25.710 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:25.710 CC test/nvme/err_injection/err_injection.o 00:02:25.710 CC test/nvme/fdp/fdp.o 00:02:25.710 CC test/env/memory/memory_ut.o 00:02:25.710 CC test/event/reactor_perf/reactor_perf.o 00:02:25.710 CC test/nvme/e2edp/nvme_dp.o 00:02:25.710 CC test/env/vtophys/vtophys.o 00:02:25.710 CC test/nvme/compliance/nvme_compliance.o 00:02:25.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.710 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.710 CC test/env/pci/pci_ut.o 00:02:25.710 CC examples/nvme/reconnect/reconnect.o 00:02:25.710 CC examples/accel/perf/accel_perf.o 00:02:25.710 CC 
examples/vmd/lsvmd/lsvmd.o 00:02:25.710 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.710 CC test/nvme/connect_stress/connect_stress.o 00:02:25.710 CC test/app/jsoncat/jsoncat.o 00:02:25.710 CC test/nvme/reset/reset.o 00:02:25.710 CC test/nvme/reserve/reserve.o 00:02:25.710 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.710 CC test/event/app_repeat/app_repeat.o 00:02:25.710 CC examples/nvme/hotplug/hotplug.o 00:02:25.710 CC examples/idxd/perf/perf.o 00:02:25.710 CC test/nvme/overhead/overhead.o 00:02:25.710 CC examples/nvme/arbitration/arbitration.o 00:02:25.710 CC test/nvme/simple_copy/simple_copy.o 00:02:25.710 CC test/nvme/startup/startup.o 00:02:25.710 CC examples/util/zipf/zipf.o 00:02:25.710 CC examples/nvme/abort/abort.o 00:02:25.710 CC test/nvme/boot_partition/boot_partition.o 00:02:25.710 CC examples/ioat/verify/verify.o 00:02:25.710 CC examples/vmd/led/led.o 00:02:25.710 CC test/app/histogram_perf/histogram_perf.o 00:02:25.710 CC examples/sock/hello_world/hello_sock.o 00:02:25.710 CC test/event/scheduler/scheduler.o 00:02:25.710 CC app/fio/nvme/fio_plugin.o 00:02:25.710 CC examples/ioat/perf/perf.o 00:02:25.710 CC examples/blob/cli/blobcli.o 00:02:25.710 CC test/accel/dif/dif.o 00:02:25.710 CC test/nvme/cuse/cuse.o 00:02:25.710 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.710 CC test/dma/test_dma/test_dma.o 00:02:25.710 CC test/blobfs/mkfs/mkfs.o 00:02:25.710 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.710 CC test/app/stub/stub.o 00:02:25.710 CC test/bdev/bdevio/bdevio.o 00:02:25.710 CC app/fio/bdev/fio_plugin.o 00:02:25.710 CC test/app/bdev_svc/bdev_svc.o 00:02:25.710 CC examples/thread/thread/thread_ex.o 00:02:25.710 CC examples/nvmf/nvmf/nvmf.o 00:02:25.710 CC examples/blob/hello_world/hello_blob.o 00:02:25.710 LINK spdk_lspci 00:02:25.710 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.974 CC test/lvol/esnap/esnap.o 00:02:25.974 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.974 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:02:25.974 LINK rpc_client_test 00:02:25.974 LINK nvmf_tgt 00:02:25.974 LINK spdk_nvme_discover 00:02:25.974 LINK vhost 00:02:25.974 LINK reactor 00:02:25.974 LINK iscsi_tgt 00:02:25.974 LINK reactor_perf 00:02:25.974 LINK lsvmd 00:02:25.974 LINK spdk_trace_record 00:02:25.974 LINK jsoncat 00:02:25.974 LINK interrupt_tgt 00:02:25.974 LINK poller_perf 00:02:25.974 LINK led 00:02:25.974 LINK zipf 00:02:25.974 LINK event_perf 00:02:25.974 LINK spdk_tgt 00:02:25.974 LINK boot_partition 00:02:25.974 LINK err_injection 00:02:25.974 LINK startup 00:02:25.974 LINK doorbell_aers 00:02:26.237 LINK vtophys 00:02:26.237 LINK app_repeat 00:02:26.237 LINK histogram_perf 00:02:26.237 LINK stub 00:02:26.237 LINK bdev_svc 00:02:26.237 CXX test/cpp_headers/dma.o 00:02:26.237 LINK hello_world 00:02:26.237 CXX test/cpp_headers/endian.o 00:02:26.237 CXX test/cpp_headers/env_dpdk.o 00:02:26.237 CXX test/cpp_headers/env.o 00:02:26.237 CXX test/cpp_headers/event.o 00:02:26.237 LINK scheduler 00:02:26.237 LINK env_dpdk_post_init 00:02:26.237 CXX test/cpp_headers/fd_group.o 00:02:26.237 LINK simple_copy 00:02:26.237 LINK cmb_copy 00:02:26.237 LINK pmr_persistence 00:02:26.237 LINK connect_stress 00:02:26.237 LINK fused_ordering 00:02:26.237 CXX test/cpp_headers/fd.o 00:02:26.237 LINK hello_bdev 00:02:26.237 LINK sgl 00:02:26.237 LINK hotplug 00:02:26.237 LINK mkfs 00:02:26.237 CXX test/cpp_headers/file.o 00:02:26.237 CXX test/cpp_headers/gpt_spec.o 00:02:26.237 CXX test/cpp_headers/ftl.o 00:02:26.237 LINK hello_sock 00:02:26.237 LINK reserve 00:02:26.237 CXX test/cpp_headers/hexlify.o 00:02:26.237 CXX test/cpp_headers/histogram_data.o 00:02:26.237 LINK spdk_dd 00:02:26.237 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.237 CXX test/cpp_headers/idxd.o 00:02:26.237 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.237 LINK ioat_perf 00:02:26.237 CXX test/cpp_headers/idxd_spec.o 00:02:26.237 LINK fdp 00:02:26.237 CXX test/cpp_headers/init.o 00:02:26.237 CXX test/cpp_headers/ioat.o 
00:02:26.237 LINK verify 00:02:26.237 CXX test/cpp_headers/ioat_spec.o 00:02:26.237 CXX test/cpp_headers/iscsi_spec.o 00:02:26.237 LINK nvme_dp 00:02:26.237 LINK thread 00:02:26.237 LINK aer 00:02:26.237 CXX test/cpp_headers/json.o 00:02:26.237 LINK spdk_trace 00:02:26.507 CXX test/cpp_headers/jsonrpc.o 00:02:26.507 LINK arbitration 00:02:26.507 LINK reset 00:02:26.507 LINK hello_blob 00:02:26.507 CXX test/cpp_headers/likely.o 00:02:26.507 LINK idxd_perf 00:02:26.507 LINK overhead 00:02:26.507 CXX test/cpp_headers/log.o 00:02:26.507 CXX test/cpp_headers/lvol.o 00:02:26.507 CXX test/cpp_headers/memory.o 00:02:26.507 CXX test/cpp_headers/mmio.o 00:02:26.507 LINK test_dma 00:02:26.507 LINK abort 00:02:26.507 LINK nvme_compliance 00:02:26.507 CXX test/cpp_headers/nbd.o 00:02:26.507 CXX test/cpp_headers/notify.o 00:02:26.507 LINK pci_ut 00:02:26.507 CXX test/cpp_headers/nvme.o 00:02:26.507 LINK reconnect 00:02:26.507 CXX test/cpp_headers/nvme_intel.o 00:02:26.507 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.507 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.507 CXX test/cpp_headers/nvme_spec.o 00:02:26.507 CXX test/cpp_headers/nvme_zns.o 00:02:26.507 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.507 LINK nvmf 00:02:26.507 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.507 CXX test/cpp_headers/nvmf.o 00:02:26.507 CXX test/cpp_headers/nvmf_spec.o 00:02:26.507 CXX test/cpp_headers/opal.o 00:02:26.507 CXX test/cpp_headers/nvmf_transport.o 00:02:26.507 CXX test/cpp_headers/opal_spec.o 00:02:26.507 CXX test/cpp_headers/pipe.o 00:02:26.507 CXX test/cpp_headers/queue.o 00:02:26.507 CXX test/cpp_headers/pci_ids.o 00:02:26.507 CXX test/cpp_headers/reduce.o 00:02:26.507 CXX test/cpp_headers/rpc.o 00:02:26.507 LINK dif 00:02:26.507 CXX test/cpp_headers/scheduler.o 00:02:26.507 LINK bdevio 00:02:26.507 CXX test/cpp_headers/scsi.o 00:02:26.507 CXX test/cpp_headers/sock.o 00:02:26.507 CXX test/cpp_headers/scsi_spec.o 00:02:26.507 CXX test/cpp_headers/stdinc.o 00:02:26.507 CXX 
test/cpp_headers/string.o 00:02:26.507 CXX test/cpp_headers/thread.o 00:02:26.507 CXX test/cpp_headers/trace.o 00:02:26.507 CXX test/cpp_headers/trace_parser.o 00:02:26.507 CXX test/cpp_headers/tree.o 00:02:26.507 CXX test/cpp_headers/util.o 00:02:26.507 CXX test/cpp_headers/ublk.o 00:02:26.507 CXX test/cpp_headers/uuid.o 00:02:26.507 CXX test/cpp_headers/version.o 00:02:26.507 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.507 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.766 CXX test/cpp_headers/vhost.o 00:02:26.766 CXX test/cpp_headers/vmd.o 00:02:26.766 CXX test/cpp_headers/xor.o 00:02:26.766 LINK accel_perf 00:02:26.766 CXX test/cpp_headers/zipf.o 00:02:26.766 LINK blobcli 00:02:26.766 LINK nvme_manage 00:02:26.766 LINK nvme_fuzz 00:02:26.766 LINK spdk_nvme 00:02:26.766 LINK spdk_bdev 00:02:26.766 LINK vhost_fuzz 00:02:27.025 LINK spdk_nvme_perf 00:02:27.025 LINK mem_callbacks 00:02:27.025 LINK spdk_top 00:02:27.025 LINK bdevperf 00:02:27.025 LINK spdk_nvme_identify 00:02:27.284 LINK memory_ut 00:02:27.284 LINK cuse 00:02:27.852 LINK iscsi_fuzz 00:02:29.759 LINK esnap 00:02:29.759 00:02:29.759 real 0m40.502s 00:02:29.759 user 6m37.911s 00:02:29.760 sys 3m15.300s 00:02:29.760 07:21:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.760 07:21:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.760 ************************************ 00:02:29.760 END TEST make 00:02:29.760 ************************************ 00:02:29.760 07:21:33 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.760 07:21:33 -- nvmf/common.sh@7 -- # uname -s 00:02:29.760 07:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.760 07:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.760 07:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.760 07:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.760 07:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.760 
07:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.760 07:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.760 07:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.760 07:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.760 07:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.760 07:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:29.760 07:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:29.760 07:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.760 07:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.760 07:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.760 07:21:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:29.760 07:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.760 07:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.760 07:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.760 07:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.760 07:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.760 07:21:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.760 07:21:33 -- paths/export.sh@5 -- # export PATH 00:02:29.760 07:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.760 07:21:33 -- nvmf/common.sh@46 -- # : 0 00:02:29.760 07:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:29.760 07:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:29.760 07:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:29.760 07:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.760 07:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.760 07:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:29.760 07:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:29.760 07:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:29.760 07:21:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.760 07:21:33 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.760 07:21:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.760 07:21:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.760 07:21:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.760 07:21:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.760 07:21:33 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.760 07:21:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.760 07:21:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.760 07:21:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.019 07:21:33 -- spdk/autotest.sh@48 -- # udevadm_pid=3893769 00:02:30.019 07:21:33 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:30.019 07:21:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.019 07:21:33 -- spdk/autotest.sh@54 -- # echo 3893771 00:02:30.019 07:21:33 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:30.019 07:21:33 -- spdk/autotest.sh@56 -- # echo 3893772 00:02:30.019 07:21:33 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:30.019 07:21:33 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:30.019 07:21:33 -- spdk/autotest.sh@60 -- # echo 3893773 00:02:30.020 07:21:33 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:30.020 07:21:33 -- spdk/autotest.sh@62 -- # echo 3893774 00:02:30.020 07:21:33 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:30.020 07:21:33 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.020 07:21:33 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:30.020 07:21:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:30.020 07:21:33 -- common/autotest_common.sh@10 -- # set +x 00:02:30.020 07:21:33 -- spdk/autotest.sh@70 -- # create_test_list 00:02:30.020 07:21:33 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:30.020 07:21:33 -- common/autotest_common.sh@10 -- # set +x 00:02:30.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:30.020 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:30.020 07:21:33 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:30.020 07:21:33 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.020 07:21:33 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.020 07:21:33 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:30.020 07:21:33 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.020 07:21:33 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:30.020 07:21:33 -- common/autotest_common.sh@1440 
-- # uname 00:02:30.020 07:21:33 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:30.020 07:21:33 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:30.020 07:21:33 -- common/autotest_common.sh@1460 -- # uname 00:02:30.020 07:21:33 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:30.020 07:21:33 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:30.020 07:21:33 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:30.020 07:21:33 -- spdk/autotest.sh@83 -- # hash lcov 00:02:30.020 07:21:33 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:30.020 07:21:33 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:30.020 --rc lcov_branch_coverage=1 00:02:30.020 --rc lcov_function_coverage=1 00:02:30.020 --rc genhtml_branch_coverage=1 00:02:30.020 --rc genhtml_function_coverage=1 00:02:30.020 --rc genhtml_legend=1 00:02:30.020 --rc geninfo_all_blocks=1 00:02:30.020 ' 00:02:30.020 07:21:33 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:30.020 --rc lcov_branch_coverage=1 00:02:30.020 --rc lcov_function_coverage=1 00:02:30.020 --rc genhtml_branch_coverage=1 00:02:30.020 --rc genhtml_function_coverage=1 00:02:30.020 --rc genhtml_legend=1 00:02:30.020 --rc geninfo_all_blocks=1 00:02:30.020 ' 00:02:30.020 07:21:33 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:30.020 --rc lcov_branch_coverage=1 00:02:30.020 --rc lcov_function_coverage=1 00:02:30.020 --rc genhtml_branch_coverage=1 00:02:30.020 --rc genhtml_function_coverage=1 00:02:30.020 --rc genhtml_legend=1 00:02:30.020 --rc geninfo_all_blocks=1 00:02:30.020 --no-external' 00:02:30.020 07:21:33 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:30.020 --rc lcov_branch_coverage=1 00:02:30.020 --rc lcov_function_coverage=1 00:02:30.020 --rc genhtml_branch_coverage=1 00:02:30.020 --rc genhtml_function_coverage=1 00:02:30.020 --rc genhtml_legend=1 00:02:30.020 --rc geninfo_all_blocks=1 00:02:30.020 --no-external' 00:02:30.020 07:21:33 -- spdk/autotest.sh@94 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:30.020 lcov: LCOV version 1.15 00:02:30.020 07:21:33 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no 
functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:31.399 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no 
functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:31.399 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:31.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:31.399 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:31.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no 
functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:31.660 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:31.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:31.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:31.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:31.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:31.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:31.920 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:31.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:31.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:31.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:31.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:31.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:31.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:41.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:41.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:41.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:41.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:41.905 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:54.119 07:21:56 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:54.119 07:21:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:54.119 07:21:56 -- common/autotest_common.sh@10 -- # set +x 00:02:54.119 07:21:56 -- spdk/autotest.sh@102 -- # rm -f 00:02:54.119 07:21:56 -- 
spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.687 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:54.687 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:54.946 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.206 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.206 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.206 07:21:59 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:55.206 07:21:59 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:55.206 07:21:59 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:55.206 07:21:59 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:55.206 07:21:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:55.206 07:21:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:55.206 07:21:59 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:55.206 07:21:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.206 07:21:59 
-- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:55.206 07:21:59 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:55.206 07:21:59 -- spdk/autotest.sh@121 -- # grep -v p 00:02:55.206 07:21:59 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:55.206 07:21:59 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:55.206 07:21:59 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:55.206 07:21:59 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:55.206 07:21:59 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:55.206 07:21:59 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:55.206 No valid GPT data, bailing 00:02:55.206 07:21:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.206 07:21:59 -- scripts/common.sh@393 -- # pt= 00:02:55.206 07:21:59 -- scripts/common.sh@394 -- # return 1 00:02:55.206 07:21:59 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:55.206 1+0 records in 00:02:55.206 1+0 records out 00:02:55.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00145718 s, 720 MB/s 00:02:55.206 07:21:59 -- spdk/autotest.sh@129 -- # sync 00:02:55.206 07:21:59 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:55.206 07:21:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:55.206 07:21:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:00.482 07:22:04 -- spdk/autotest.sh@135 -- # uname -s 00:03:00.482 07:22:04 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:00.482 07:22:04 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.482 07:22:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:00.482 07:22:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:00.482 07:22:04 -- common/autotest_common.sh@10 -- # 
set +x 00:03:00.482 ************************************ 00:03:00.482 START TEST setup.sh 00:03:00.482 ************************************ 00:03:00.482 07:22:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:00.482 * Looking for test storage... 00:03:00.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.482 07:22:04 -- setup/test-setup.sh@10 -- # uname -s 00:03:00.482 07:22:04 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:00.482 07:22:04 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.482 07:22:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:00.482 07:22:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:00.482 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:03:00.482 ************************************ 00:03:00.482 START TEST acl 00:03:00.482 ************************************ 00:03:00.482 07:22:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:00.482 * Looking for test storage... 
00:03:00.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.482 07:22:04 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:00.482 07:22:04 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:00.482 07:22:04 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:00.482 07:22:04 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:00.482 07:22:04 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:00.482 07:22:04 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:00.482 07:22:04 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:00.482 07:22:04 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.482 07:22:04 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:00.482 07:22:04 -- setup/acl.sh@12 -- # devs=() 00:03:00.482 07:22:04 -- setup/acl.sh@12 -- # declare -a devs 00:03:00.482 07:22:04 -- setup/acl.sh@13 -- # drivers=() 00:03:00.482 07:22:04 -- setup/acl.sh@13 -- # declare -A drivers 00:03:00.482 07:22:04 -- setup/acl.sh@51 -- # setup reset 00:03:00.482 07:22:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.482 07:22:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.773 07:22:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:03.773 07:22:07 -- setup/acl.sh@16 -- # local dev driver 00:03:03.773 07:22:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.773 07:22:07 -- setup/acl.sh@15 -- # setup output status 00:03:03.773 07:22:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.773 07:22:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:06.309 Hugepages 00:03:06.309 node hugesize free / total 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # continue 00:03:06.309 07:22:09 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 00:03:06.309 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:09 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:09 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- 
setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:06.309 07:22:10 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:06.309 
07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:06.309 07:22:10 -- setup/acl.sh@20 -- # continue 00:03:06.309 07:22:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.309 07:22:10 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:06.309 07:22:10 -- setup/acl.sh@54 -- # run_test denied denied 00:03:06.309 07:22:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:06.309 07:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:06.309 07:22:10 -- common/autotest_common.sh@10 -- # set +x 00:03:06.309 ************************************ 00:03:06.309 START TEST denied 00:03:06.309 
************************************ 00:03:06.309 07:22:10 -- common/autotest_common.sh@1104 -- # denied 00:03:06.309 07:22:10 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:06.309 07:22:10 -- setup/acl.sh@38 -- # setup output config 00:03:06.309 07:22:10 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:06.309 07:22:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.309 07:22:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.602 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:09.602 07:22:13 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:09.602 07:22:13 -- setup/acl.sh@28 -- # local dev driver 00:03:09.602 07:22:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:09.602 07:22:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:09.602 07:22:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:09.602 07:22:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:09.602 07:22:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:09.602 07:22:13 -- setup/acl.sh@41 -- # setup reset 00:03:09.602 07:22:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.602 07:22:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.800 00:03:13.800 real 0m6.930s 00:03:13.800 user 0m2.305s 00:03:13.800 sys 0m3.937s 00:03:13.800 07:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.800 07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:03:13.800 ************************************ 00:03:13.800 END TEST denied 00:03:13.800 ************************************ 00:03:13.800 07:22:17 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:13.800 07:22:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:13.800 07:22:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:13.800 
07:22:17 -- common/autotest_common.sh@10 -- # set +x 00:03:13.800 ************************************ 00:03:13.800 START TEST allowed 00:03:13.800 ************************************ 00:03:13.800 07:22:17 -- common/autotest_common.sh@1104 -- # allowed 00:03:13.800 07:22:17 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:13.800 07:22:17 -- setup/acl.sh@45 -- # setup output config 00:03:13.800 07:22:17 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:13.800 07:22:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.800 07:22:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.091 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:17.091 07:22:20 -- setup/acl.sh@47 -- # verify 00:03:17.091 07:22:20 -- setup/acl.sh@28 -- # local dev driver 00:03:17.091 07:22:20 -- setup/acl.sh@48 -- # setup reset 00:03:17.091 07:22:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.091 07:22:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.387 00:03:20.387 real 0m6.992s 00:03:20.387 user 0m2.217s 00:03:20.387 sys 0m3.965s 00:03:20.387 07:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.387 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.387 ************************************ 00:03:20.387 END TEST allowed 00:03:20.387 ************************************ 00:03:20.387 00:03:20.387 real 0m19.925s 00:03:20.387 user 0m6.833s 00:03:20.387 sys 0m11.819s 00:03:20.387 07:22:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.387 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.387 ************************************ 00:03:20.387 END TEST acl 00:03:20.387 ************************************ 00:03:20.387 07:22:24 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.387 07:22:24 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.387 07:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.387 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.387 ************************************ 00:03:20.387 START TEST hugepages 00:03:20.387 ************************************ 00:03:20.387 07:22:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.387 * Looking for test storage... 00:03:20.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.387 07:22:24 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.387 07:22:24 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.387 07:22:24 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.387 07:22:24 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.387 07:22:24 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.387 07:22:24 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.387 07:22:24 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.387 07:22:24 -- setup/common.sh@18 -- # local node= 00:03:20.387 07:22:24 -- setup/common.sh@19 -- # local var val 00:03:20.387 07:22:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.387 07:22:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.387 07:22:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.387 07:22:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.387 07:22:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.387 07:22:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.387 07:22:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 69424860 kB' 'MemAvailable: 73269188 kB' 'Buffers: 4136 kB' 'Cached: 16082128 kB' 
'SwapCached: 0 kB' 'Active: 12753400 kB' 'Inactive: 3856048 kB' 'Active(anon): 12303956 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526460 kB' 'Mapped: 162220 kB' 'Shmem: 11780772 kB' 'KReclaimable: 270028 kB' 'Slab: 1153976 kB' 'SReclaimable: 270028 kB' 'SUnreclaim: 883948 kB' 'KernelStack: 20256 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52947900 kB' 'Committed_AS: 13623104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215596 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:20.387 07:22:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.387 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.387 07:22:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.387 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.387 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.387 07:22:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 
07:22:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- 
setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 
-- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.388 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.388 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # continue 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.389 07:22:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.389 07:22:24 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.389 07:22:24 -- setup/common.sh@33 -- # echo 2048 00:03:20.389 07:22:24 -- setup/common.sh@33 -- # return 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:20.389 07:22:24 -- setup/hugepages.sh@17 -- # 
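The trace above is `setup/common.sh` walking /proc/meminfo key by key (`IFS=': ' read -r var val _`, `continue` on every non-matching key) until it hits `Hugepagesize`, then echoing the value (2048) and returning. A minimal sketch of that scan loop, inferred from the trace rather than copied from the actual SPDK source:

```shell
# Sketch of the get_meminfo scan seen in the trace: split each
# "Key:   value kB" line on ':' and ' ', skip keys that don't match,
# and print the value of the first matching key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# In the trace the lookup is Hugepagesize (2048 kB on this runner);
# MemTotal is used here only as a key that exists on any Linux system.
get_meminfo MemTotal
```

The `\H\u\g\e\p\a\g\e\s\i\z\e` pattern in the log is just bash xtrace quoting each character of the right-hand side of the `[[ ... == ... ]]` comparison.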
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:20.389 07:22:24 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:20.389 07:22:24 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:20.389 07:22:24 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:20.389 07:22:24 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:20.389 07:22:24 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:20.389 07:22:24 -- setup/hugepages.sh@207 -- # get_nodes 00:03:20.389 07:22:24 -- setup/hugepages.sh@27 -- # local node 00:03:20.389 07:22:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.389 07:22:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:20.389 07:22:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.389 07:22:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.389 07:22:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.389 07:22:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.389 07:22:24 -- setup/hugepages.sh@208 -- # clear_hp 00:03:20.389 07:22:24 -- setup/hugepages.sh@37 -- # local node hp 00:03:20.389 07:22:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.389 07:22:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.389 07:22:24 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.389 07:22:24 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.389 07:22:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.389 07:22:24 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.389 07:22:24 -- setup/hugepages.sh@41 -- # echo 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.389 07:22:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.389 07:22:24 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:20.389 07:22:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.389 07:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.389 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.389 ************************************ 00:03:20.389 START TEST default_setup 00:03:20.389 ************************************ 00:03:20.389 07:22:24 -- common/autotest_common.sh@1104 -- # default_setup 00:03:20.389 07:22:24 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.389 07:22:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.389 07:22:24 -- setup/hugepages.sh@51 -- # shift 00:03:20.389 07:22:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.389 07:22:24 -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.389 07:22:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.389 07:22:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.389 07:22:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.389 07:22:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.389 07:22:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.389 07:22:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.389 07:22:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.389 07:22:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.389 07:22:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.389 07:22:24 -- setup/hugepages.sh@70 -- # for _no_nodes in 
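The `get_test_nr_hugepages 2097152 0` call traced above turns a requested memory size into a page count: 2097152 kB divided by the 2048 kB default hugepage size gives `nr_hugepages=1024`, which is exactly the `HugePages_Total: 1024` reported in the meminfo dumps further down. The arithmetic, as observed in the trace:

```shell
# Size-to-page-count arithmetic from the get_test_nr_hugepages trace:
# requested size in kB / default hugepage size in kB = number of pages.
size_kb=2097152          # argument seen in the trace (2 GiB)
default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size_kb / default_hugepages ))
echo "$nr_hugepages"     # 1024, matching nr_hugepages=1024 in the log
```

Because `HUGENODE` was unset and node id 0 was passed explicitly, all 1024 pages land in `nodes_test[0]` rather than being split across both NUMA nodes.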
"${user_nodes[@]}" 00:03:20.389 07:22:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.389 07:22:24 -- setup/hugepages.sh@73 -- # return 0 00:03:20.389 07:22:24 -- setup/hugepages.sh@137 -- # setup output 00:03:20.389 07:22:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.389 07:22:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.924 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.924 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.924 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.237 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.237 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.237 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.237 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.237 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:23.238 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.900 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:24.161 07:22:27 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:24.161 07:22:27 -- setup/hugepages.sh@89 -- # local node 00:03:24.161 07:22:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.161 07:22:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.161 07:22:27 -- setup/hugepages.sh@92 -- # local surp 00:03:24.161 07:22:27 -- setup/hugepages.sh@93 -- # local resv 00:03:24.161 07:22:27 -- setup/hugepages.sh@94 -- # local anon 00:03:24.161 07:22:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.161 07:22:27 -- 
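The `ioatdma -> vfio-pci` and `nvme -> vfio-pci` lines above record setup.sh rebinding each PCI function to vfio-pci so userspace (SPDK) can drive it. One common way to do this on Linux is the sysfs `driver_override` mechanism; the hypothetical helper below only echoes the writes it would perform (a dry run), since the real operations require root and would detach live devices:

```shell
# Dry-run sketch of a driver_override-based rebind like the ones logged
# above (e.g. 0000:00:04.7 ioatdma -> vfio-pci). rebind_to_vfio is a
# hypothetical helper, not part of SPDK's setup.sh; it prints the sysfs
# writes instead of performing them.
rebind_to_vfio() {
    local bdf=$1
    echo "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind"
    echo "echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override"
    echo "echo $bdf > /sys/bus/pci/drivers_probe"
}

rebind_to_vfio 0000:00:04.7
```

After the rebind, `verify_nr_hugepages` re-reads /proc/meminfo (the AnonHugePages and HugePages_Surp scans that follow) to confirm the pool survived device setup.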
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.161 07:22:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.161 07:22:27 -- setup/common.sh@18 -- # local node= 00:03:24.161 07:22:27 -- setup/common.sh@19 -- # local var val 00:03:24.161 07:22:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.161 07:22:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.161 07:22:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.161 07:22:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.161 07:22:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.161 07:22:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.161 07:22:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71584464 kB' 'MemAvailable: 75428776 kB' 'Buffers: 4136 kB' 'Cached: 16082228 kB' 'SwapCached: 0 kB' 'Active: 12767508 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318064 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541152 kB' 'Mapped: 162136 kB' 'Shmem: 11780872 kB' 'KReclaimable: 269996 kB' 'Slab: 1153288 kB' 'SReclaimable: 269996 kB' 'SUnreclaim: 883292 kB' 'KernelStack: 19648 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13637816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215164 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.161 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.161 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # 
[[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.162 07:22:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.162 07:22:27 -- setup/common.sh@33 -- # echo 0 00:03:24.162 07:22:27 -- setup/common.sh@33 -- # return 0 00:03:24.162 07:22:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.162 07:22:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.162 07:22:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.162 07:22:27 -- setup/common.sh@18 -- # local node= 00:03:24.162 07:22:27 -- setup/common.sh@19 -- # local var val 00:03:24.162 07:22:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.162 07:22:27 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:24.162 07:22:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.162 07:22:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.162 07:22:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.162 07:22:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.162 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71586028 kB' 'MemAvailable: 75430340 kB' 'Buffers: 4136 kB' 'Cached: 16082232 kB' 'SwapCached: 0 kB' 'Active: 12768980 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319536 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542160 kB' 'Mapped: 162212 kB' 'Shmem: 11780876 kB' 'KReclaimable: 269996 kB' 'Slab: 1153304 kB' 'SReclaimable: 269996 kB' 'SUnreclaim: 883308 kB' 'KernelStack: 19696 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13637828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215116 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.163 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.163 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.164 07:22:27 -- setup/common.sh@33 -- # echo 0 00:03:24.164 07:22:27 -- setup/common.sh@33 -- # return 0 00:03:24.164 07:22:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.164 07:22:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.164 07:22:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.164 07:22:27 -- setup/common.sh@18 -- # local node= 00:03:24.164 07:22:27 -- setup/common.sh@19 -- # local var val 00:03:24.164 07:22:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.164 07:22:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.164 07:22:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.164 07:22:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.164 07:22:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.164 07:22:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71586556 kB' 'MemAvailable: 75430868 kB' 'Buffers: 4136 kB' 'Cached: 16082244 kB' 'SwapCached: 0 kB' 'Active: 12768388 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318944 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541488 kB' 'Mapped: 162116 kB' 'Shmem: 11780888 kB' 'KReclaimable: 269996 kB' 'Slab: 1153268 kB' 'SReclaimable: 269996 kB' 'SUnreclaim: 883272 kB' 
'KernelStack: 19664 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13637844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215132 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:27 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 
07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.164 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.164 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.165 07:22:28 -- setup/common.sh@33 -- # echo 0 00:03:24.165 07:22:28 -- setup/common.sh@33 -- # return 0 00:03:24.165 07:22:28 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.165 07:22:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.165 nr_hugepages=1024 00:03:24.165 07:22:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.165 resv_hugepages=0 00:03:24.165 07:22:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.165 surplus_hugepages=0 00:03:24.165 07:22:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.165 anon_hugepages=0 00:03:24.165 07:22:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.165 07:22:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.165 07:22:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.165 07:22:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.165 07:22:28 -- setup/common.sh@18 -- # local node= 00:03:24.165 07:22:28 -- setup/common.sh@19 -- # local var val 00:03:24.165 07:22:28 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:24.165 07:22:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.165 07:22:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.165 07:22:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.165 07:22:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.165 07:22:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.165 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.165 07:22:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71586304 kB' 'MemAvailable: 75430616 kB' 'Buffers: 4136 kB' 'Cached: 16082256 kB' 'SwapCached: 0 kB' 'Active: 12768568 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319124 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541692 kB' 'Mapped: 162116 kB' 'Shmem: 11780900 kB' 'KReclaimable: 269996 kB' 'Slab: 1153268 kB' 'SReclaimable: 269996 kB' 'SUnreclaim: 883272 kB' 'KernelStack: 19680 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13638008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215148 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.165 07:22:28 -- setup/common.sh@32 -- # continue 
00:03:24.165 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue 00:03:24.166 07:22:28 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.166 07:22:28 -- setup/common.sh@31 -- # read -r var val _
00:03:24.166 07:22:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.166 07:22:28 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue xtrace iterations elided for the remaining meminfo fields (KReclaimable through Unaccepted) ...]
00:03:24.167 07:22:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.167 07:22:28 -- setup/common.sh@33 -- # echo 1024
00:03:24.167 07:22:28 -- setup/common.sh@33 -- # return 0
00:03:24.167 07:22:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.167 07:22:28 -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.167 07:22:28 -- setup/hugepages.sh@27 -- # local node
00:03:24.167 07:22:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.167 07:22:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.167 07:22:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.167 07:22:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:24.167 07:22:28 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.167 07:22:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.167 07:22:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.167 07:22:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.167 07:22:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.167 07:22:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.167 07:22:28 -- setup/common.sh@18 -- # local node=0
00:03:24.167 07:22:28 -- setup/common.sh@19 -- # local var val
00:03:24.167 07:22:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.167 07:22:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.167 07:22:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.167 07:22:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.167 07:22:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.167 07:22:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.167 07:22:28 -- setup/common.sh@31 -- # IFS=': '
00:03:24.167 07:22:28 -- setup/common.sh@31 -- # read -r var val _
00:03:24.167 07:22:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 17653896 kB' 'MemUsed: 14976700 kB' 'SwapCached: 0 kB' 'Active: 7594572 kB' 'Inactive: 3666000 kB' 'Active(anon): 7337912 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074128 kB' 'Mapped: 125392 kB' 'AnonPages: 189724 kB' 'Shmem: 7151468 kB' 'KernelStack: 10984 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 630584 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 483516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:24.167 07:22:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.167 07:22:28 -- setup/common.sh@32 -- # continue
[... identical read/compare/continue xtrace iterations elided for the remaining node0 meminfo fields (MemFree through HugePages_Free) ...]
00:03:24.168 07:22:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.168 07:22:28 -- setup/common.sh@33 -- # echo 0
00:03:24.168 07:22:28 -- setup/common.sh@33 -- # return 0
00:03:24.168 07:22:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.168 07:22:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.168 07:22:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.168 07:22:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.168 07:22:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:24.168 node0=1024 expecting 1024
00:03:24.168 07:22:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:24.168
00:03:24.168 real	0m3.775s
00:03:24.168 user	0m1.228s
00:03:24.168 sys	0m1.822s
00:03:24.168 07:22:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:24.168 07:22:28 -- common/autotest_common.sh@10 -- # set +x
00:03:24.168 ************************************
00:03:24.168 END TEST default_setup
00:03:24.168 ************************************
00:03:24.168 07:22:28 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:24.168 07:22:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:24.168 07:22:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:24.439 07:22:28 -- common/autotest_common.sh@10 -- # set +x
00:03:24.439 ************************************
START TEST per_node_1G_alloc 00:03:24.439 ************************************ 00:03:24.439 07:22:28 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:24.439 07:22:28 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:24.439 07:22:28 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:24.439 07:22:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.439 07:22:28 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:24.439 07:22:28 -- setup/hugepages.sh@51 -- # shift 00:03:24.439 07:22:28 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:24.439 07:22:28 -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.439 07:22:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.439 07:22:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.439 07:22:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:24.439 07:22:28 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:24.439 07:22:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.440 07:22:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.440 07:22:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.440 07:22:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.440 07:22:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.440 07:22:28 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:24.440 07:22:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.440 07:22:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.440 07:22:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.440 07:22:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.440 07:22:28 -- setup/hugepages.sh@73 -- # return 0 00:03:24.440 07:22:28 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:24.440 07:22:28 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:24.440 07:22:28 -- setup/hugepages.sh@146 -- # setup output 00:03:24.440 07:22:28 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:24.440 07:22:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.986 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.986 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.986 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.987 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.251 07:22:30 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:27.251 07:22:30 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:27.251 07:22:30 -- setup/hugepages.sh@89 -- # local node 00:03:27.251 07:22:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.251 07:22:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.251 07:22:30 -- setup/hugepages.sh@92 -- # local surp 00:03:27.251 07:22:30 -- setup/hugepages.sh@93 -- # local resv 00:03:27.251 07:22:30 -- setup/hugepages.sh@94 -- # local anon 00:03:27.251 07:22:30 -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.251 07:22:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.251 07:22:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.251 07:22:30 -- setup/common.sh@18 -- # local node= 00:03:27.251 07:22:30 -- setup/common.sh@19 -- # local var val 00:03:27.251 07:22:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.251 07:22:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.251 07:22:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.251 07:22:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.251 07:22:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.251 07:22:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.251 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.251 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71554356 kB' 'MemAvailable: 75398652 kB' 'Buffers: 4136 kB' 'Cached: 16082344 kB' 'SwapCached: 0 kB' 'Active: 12766944 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317500 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539812 kB' 'Mapped: 161016 kB' 'Shmem: 11780988 kB' 'KReclaimable: 269964 kB' 'Slab: 1153196 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883232 kB' 'KernelStack: 19856 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13631200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215500 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- 
setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 
00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue 00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.252 07:22:30 -- setup/common.sh@31 -- # IFS=': '
00:03:27.252 07:22:30 -- setup/common.sh@31 -- # read -r var val _
00:03:27.252 07:22:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.252 07:22:30 -- setup/common.sh@32 -- # continue
[... identical non-matching "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" iterations for the remaining /proc/meminfo keys (Shmem .. HardwareCorrupted) ...]
00:03:27.253 07:22:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.253 07:22:30 -- setup/common.sh@33 -- # echo 0
00:03:27.253 07:22:30 -- setup/common.sh@33 -- # return 0
00:03:27.253 07:22:30 -- setup/hugepages.sh@97 -- # anon=0
00:03:27.253 07:22:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.253 07:22:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.253 07:22:30 -- setup/common.sh@18 -- # local node=
00:03:27.253 07:22:30 -- setup/common.sh@19 -- # local var val
00:03:27.253 07:22:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.253 07:22:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.253 07:22:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.253 07:22:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.253 07:22:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.253 07:22:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.253 07:22:30 -- setup/common.sh@31 -- # IFS=': '
00:03:27.253 07:22:30 -- setup/common.sh@31 -- # read -r var val _
00:03:27.253 07:22:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71556392 kB' 'MemAvailable: 75400688 kB' 'Buffers: 4136 kB' 'Cached: 16082344 kB' 'SwapCached: 0 kB' 'Active: 12766788 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317344 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539632 kB' 'Mapped: 161016 kB' 'Shmem: 11780988 kB' 'KReclaimable: 269964 kB' 'Slab: 1153172 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883208 kB' 'KernelStack: 19696 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13632708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215468 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB'
00:03:27.253 07:22:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.253 07:22:31 -- setup/common.sh@32 -- # continue
[... identical non-matching "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" iterations for the remaining /proc/meminfo keys (MemFree .. HugePages_Rsvd) ...]
00:03:27.254 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.254 07:22:31 -- setup/common.sh@33 -- # echo 0
00:03:27.254 07:22:31 -- setup/common.sh@33 -- # return 0
00:03:27.254 07:22:31 -- setup/hugepages.sh@99 -- # surp=0
00:03:27.254 07:22:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.254 07:22:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.254 07:22:31 -- setup/common.sh@18 -- # local node=
00:03:27.254 07:22:31 -- setup/common.sh@19 -- # local var val
00:03:27.254 07:22:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.254 07:22:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.254 07:22:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.254 07:22:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.254 07:22:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.254 07:22:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.254 07:22:31 -- setup/common.sh@31 -- # IFS=': '
00:03:27.254 07:22:31 -- setup/common.sh@31 -- # read -r var val _
00:03:27.254 07:22:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71555156 kB' 'MemAvailable: 75399452 kB' 'Buffers: 4136 kB' 'Cached: 16082356 kB' 'SwapCached: 0 kB' 'Active: 12767260 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317816 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540064 kB' 'Mapped: 161032 kB' 'Shmem: 11781000 kB' 'KReclaimable: 269964 kB' 'Slab: 1153204 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883240 kB' 
'KernelStack: 19808 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13632724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215564 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB'
00:03:27.254 07:22:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.254 07:22:31 -- setup/common.sh@32 -- # continue
[... identical non-matching "[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" iterations for the remaining /proc/meminfo keys (MemFree .. HugePages_Free) ...]
00:03:27.255 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.255 07:22:31 -- setup/common.sh@33 -- # echo 0
00:03:27.255 07:22:31 -- setup/common.sh@33 -- # return 0
00:03:27.256 07:22:31 -- setup/hugepages.sh@100 -- # resv=0
00:03:27.256 07:22:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.256 nr_hugepages=1024
00:03:27.256 07:22:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.256 resv_hugepages=0
00:03:27.256 07:22:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.256 surplus_hugepages=0
00:03:27.256 07:22:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:27.256 anon_hugepages=0
00:03:27.256 07:22:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.256 07:22:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:27.256 07:22:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.256 07:22:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.256 07:22:31 -- setup/common.sh@18 -- # local node=
00:03:27.256 07:22:31 -- setup/common.sh@19 -- # local var val
00:03:27.256 07:22:31 -- setup/common.sh@20 -- # local
mem_f mem 00:03:27.256 07:22:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.256 07:22:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.256 07:22:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.256 07:22:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.256 07:22:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.256 07:22:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71555364 kB' 'MemAvailable: 75399660 kB' 'Buffers: 4136 kB' 'Cached: 16082372 kB' 'SwapCached: 0 kB' 'Active: 12767196 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317752 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539992 kB' 'Mapped: 161016 kB' 'Shmem: 11781016 kB' 'KReclaimable: 269964 kB' 'Slab: 1153204 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883240 kB' 'KernelStack: 19840 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13632736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215580 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 
00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.256 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.256 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.257 07:22:31 -- setup/common.sh@33 -- # echo 1024 00:03:27.257 07:22:31 -- setup/common.sh@33 -- # return 0 00:03:27.257 07:22:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.257 07:22:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.257 07:22:31 -- setup/hugepages.sh@27 -- # local node 00:03:27.257 07:22:31 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:27.257 07:22:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.257 07:22:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.257 07:22:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.257 07:22:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.257 07:22:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.257 07:22:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.257 07:22:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.257 07:22:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.257 07:22:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.257 07:22:31 -- setup/common.sh@18 -- # local node=0 00:03:27.257 07:22:31 -- setup/common.sh@19 -- # local var val 00:03:27.257 07:22:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.257 07:22:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.257 07:22:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.257 07:22:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.257 07:22:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.257 07:22:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.257 07:22:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 18696080 kB' 'MemUsed: 13934516 kB' 'SwapCached: 0 kB' 'Active: 7592916 kB' 'Inactive: 3666000 kB' 'Active(anon): 7336256 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074240 kB' 'Mapped: 124844 kB' 'AnonPages: 187896 kB' 'Shmem: 7151580 kB' 'KernelStack: 10936 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'KReclaimable: 147068 kB' 'Slab: 630456 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 483388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.257 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.257 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.258 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.258 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.258 07:22:31 -- setup/common.sh@33 -- # echo 0 00:03:27.258 07:22:31 -- setup/common.sh@33 -- # return 0 00:03:27.258 07:22:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.258 07:22:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.258 07:22:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.258 07:22:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.259 07:22:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.259 07:22:31 -- setup/common.sh@18 -- # local node=1 00:03:27.259 07:22:31 -- setup/common.sh@19 -- # local var val 00:03:27.259 07:22:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.259 07:22:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.259 07:22:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.259 07:22:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.259 07:22:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.259 07:22:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682304 kB' 'MemFree: 52855448 kB' 'MemUsed: 7826856 kB' 'SwapCached: 0 kB' 
'Active: 5174512 kB' 'Inactive: 190048 kB' 'Active(anon): 4981728 kB' 'Inactive(anon): 0 kB' 'Active(file): 192784 kB' 'Inactive(file): 190048 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5012288 kB' 'Mapped: 36172 kB' 'AnonPages: 352292 kB' 'Shmem: 4629456 kB' 'KernelStack: 8936 kB' 'PageTables: 5332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122896 kB' 'Slab: 522748 kB' 'SReclaimable: 122896 kB' 'SUnreclaim: 399852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 
07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 
07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.259 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.259 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.260 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.260 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.260 07:22:31 -- setup/common.sh@32 -- # continue 00:03:27.260 07:22:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.260 07:22:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.260 07:22:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.260 07:22:31 -- setup/common.sh@33 -- # echo 0 00:03:27.260 07:22:31 -- setup/common.sh@33 -- # return 0 00:03:27.260 07:22:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.260 07:22:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.260 07:22:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.260 07:22:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.260 node0=512 expecting 512 00:03:27.260 07:22:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.260 07:22:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.260 07:22:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.260 07:22:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:27.260 node1=512 expecting 512 00:03:27.260 07:22:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.260 00:03:27.260 real 
0m3.032s 00:03:27.260 user 0m1.238s 00:03:27.260 sys 0m1.861s 00:03:27.260 07:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.260 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:03:27.260 ************************************ 00:03:27.260 END TEST per_node_1G_alloc 00:03:27.260 ************************************ 00:03:27.260 07:22:31 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:27.260 07:22:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.260 07:22:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.260 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:03:27.260 ************************************ 00:03:27.260 START TEST even_2G_alloc 00:03:27.260 ************************************ 00:03:27.260 07:22:31 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:27.260 07:22:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:27.260 07:22:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.260 07:22:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.260 07:22:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.260 07:22:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.260 07:22:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.260 07:22:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.260 07:22:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.260 07:22:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.260 07:22:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.260 07:22:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:27.260 07:22:31 -- setup/hugepages.sh@83 -- # : 512 00:03:27.260 07:22:31 -- setup/hugepages.sh@84 -- # : 1 00:03:27.260 07:22:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:27.260 07:22:31 -- setup/hugepages.sh@83 -- # : 0 00:03:27.260 07:22:31 -- setup/hugepages.sh@84 -- # : 0 00:03:27.260 07:22:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.260 07:22:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:27.260 07:22:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:27.260 07:22:31 -- setup/hugepages.sh@153 -- # setup output 00:03:27.260 07:22:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.260 07:22:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.553 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.553 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.553 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.553 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.553 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.553 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.553 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.1 (8086 
2021): Already using the vfio-pci driver 00:03:30.554 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.554 07:22:33 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:30.554 07:22:33 -- setup/hugepages.sh@89 -- # local node 00:03:30.554 07:22:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.554 07:22:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.554 07:22:33 -- setup/hugepages.sh@92 -- # local surp 00:03:30.554 07:22:33 -- setup/hugepages.sh@93 -- # local resv 00:03:30.554 07:22:33 -- setup/hugepages.sh@94 -- # local anon 00:03:30.554 07:22:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.554 07:22:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.554 07:22:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.554 07:22:33 -- setup/common.sh@18 -- # local node= 00:03:30.554 07:22:33 -- setup/common.sh@19 -- # local var val 00:03:30.554 07:22:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.554 07:22:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.554 07:22:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.554 07:22:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.554 07:22:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.554 07:22:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71555508 kB' 'MemAvailable: 75399804 kB' 'Buffers: 4136 kB' 'Cached: 16082456 kB' 'SwapCached: 0 kB' 'Active: 12767872 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318428 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 540956 kB' 'Mapped: 161044 kB' 'Shmem: 11781100 kB' 'KReclaimable: 269964 kB' 'Slab: 1152784 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882820 kB' 'KernelStack: 19552 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13639256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215468 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- 
setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.554 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.554 07:22:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- 
setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.555 07:22:33 -- setup/common.sh@33 -- # echo 0 00:03:30.555 07:22:33 -- setup/common.sh@33 -- # return 0 00:03:30.555 07:22:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.555 07:22:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.555 07:22:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.555 07:22:33 -- setup/common.sh@18 -- # local node= 00:03:30.555 07:22:33 -- setup/common.sh@19 -- # local var val 00:03:30.555 07:22:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.555 07:22:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.555 07:22:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.555 07:22:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.555 07:22:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.555 07:22:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71555820 kB' 'MemAvailable: 75400116 kB' 'Buffers: 4136 kB' 'Cached: 16082460 kB' 'SwapCached: 0 kB' 'Active: 12767168 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317724 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540344 kB' 'Mapped: 161020 kB' 'Shmem: 11781104 kB' 'KReclaimable: 269964 kB' 'Slab: 1152840 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882876 kB' 'KernelStack: 19520 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13628452 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215420 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.555 07:22:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.555 07:22:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:30.555 07:22:33 -- setup/common.sh@32 -- # continue  [trace condensed: get_meminfo HugePages_Surp loop skipped the non-matching /proc/meminfo keys Active through Unaccepted with repeated "[[ <key> == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _" records; timestamps 00:03:30.555-00:03:30.557, 07:22:33-07:22:34]
00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.557 07:22:34 -- setup/common.sh@33 -- # echo 0 00:03:30.557 07:22:34 -- setup/common.sh@33 -- # return 0 00:03:30.557 07:22:34 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.557 07:22:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.557 07:22:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.557 07:22:34 -- setup/common.sh@18 -- # local node= 00:03:30.557 07:22:34 -- setup/common.sh@19 -- # local var val 00:03:30.557 07:22:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.557 07:22:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.557 07:22:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.557 07:22:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.557 07:22:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.557 07:22:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 
'MemFree: 71556164 kB' 'MemAvailable: 75400460 kB' 'Buffers: 4136 kB' 'Cached: 16082476 kB' 'SwapCached: 0 kB' 'Active: 12767020 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317576 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540140 kB' 'Mapped: 161020 kB' 'Shmem: 11781120 kB' 'KReclaimable: 269964 kB' 'Slab: 1152832 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882868 kB' 'KernelStack: 19504 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13628472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215420 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.557 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.557 07:22:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.557 07:22:34 -- 
setup/common.sh@32 -- # continue  [trace condensed: get_meminfo HugePages_Rsvd loop skipped the non-matching /proc/meminfo keys MemAvailable through FilePmdMapped with repeated "[[ <key> == HugePages_Rsvd ]] / continue / IFS=': ' / read -r var val _" records; timestamps 00:03:30.557-00:03:30.558] 00:03:30.558
07:22:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.558 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.558 07:22:34 -- setup/common.sh@33 -- # echo 0 00:03:30.558 07:22:34 -- setup/common.sh@33 -- # return 0 00:03:30.558 07:22:34 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.558 07:22:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.558 nr_hugepages=1024 00:03:30.558 07:22:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.558 resv_hugepages=0 00:03:30.558 07:22:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.558 surplus_hugepages=0 
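The repeated "[[ <key> == HugePages_Rsvd ]] / continue" records above come from a helper that scans /proc/meminfo one "Key: value" pair at a time until the requested key matches. Below is a minimal, self-contained sketch of that pattern, reconstructed from the visible trace; the actual setup/common.sh helper in SPDK may differ in details.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern exercised in the trace above.
# Assumption: reconstructed from the "IFS=': ' read -r var val _" loop
# visible in the log; not the verbatim SPDK implementation.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo
	local -a mem

	# Per-NUMA-node stats live under /sys/devices/system/node/nodeN/meminfo.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" lines until the requested key matches.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo MemTotal   # prints total system memory in kB on Linux
```

Setting `IFS=': '` only for the `read` splits each line on the colon and surrounding spaces, so `var` gets the key, `val` the number, and the trailing `kB` unit falls into the throwaway `_` field — which is why every non-matching key in the log produces exactly one `continue` record.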
00:03:30.558 07:22:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.558 anon_hugepages=0 00:03:30.558 07:22:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.558 07:22:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.558 07:22:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.558 07:22:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.558 07:22:34 -- setup/common.sh@18 -- # local node= 00:03:30.558 07:22:34 -- setup/common.sh@19 -- # local var val 00:03:30.558 07:22:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.558 07:22:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.558 07:22:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.558 07:22:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.558 07:22:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.558 07:22:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.558 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71556164 kB' 'MemAvailable: 75400460 kB' 'Buffers: 4136 kB' 'Cached: 16082492 kB' 'SwapCached: 0 kB' 'Active: 12767084 kB' 'Inactive: 3856048 kB' 'Active(anon): 12317640 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540196 kB' 'Mapped: 161020 kB' 'Shmem: 11781136 kB' 'KReclaimable: 269964 kB' 'Slab: 1152832 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882868 kB' 'KernelStack: 19504 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13628620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215420 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.559 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.559 07:22:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.559 07:22:34 -- 
setup/common.sh@32 -- # continue  [trace condensed: get_meminfo HugePages_Total loop skipped the non-matching /proc/meminfo keys Active through SUnreclaim with repeated "[[ <key> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _" records; timestamps 00:03:30.559-00:03:30.560] 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.560 07:22:34 -- setup/common.sh@33 -- # echo 1024 00:03:30.560 07:22:34 -- setup/common.sh@33 -- # return 0 00:03:30.560 07:22:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.560 07:22:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.560 07:22:34 -- setup/hugepages.sh@27 -- # local node 00:03:30.560 07:22:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.560 07:22:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.560 07:22:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.560 07:22:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.560 07:22:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.560 07:22:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.560 07:22:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.560 07:22:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.560 07:22:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.560 07:22:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.560 07:22:34 -- setup/common.sh@18 -- # local node=0 00:03:30.560 07:22:34 -- setup/common.sh@19 -- # local var val 00:03:30.560 07:22:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.560 07:22:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.560 07:22:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.560 07:22:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.560 07:22:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.560 07:22:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 18686624 kB' 'MemUsed: 13943972 kB' 'SwapCached: 0 kB' 'Active: 7592196 kB' 'Inactive: 3666000 kB' 'Active(anon): 7335536 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074320 kB' 'Mapped: 125348 kB' 'AnonPages: 187012 kB' 'Shmem: 7151660 kB' 'KernelStack: 10904 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 630008 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 482940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.560 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.560 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@33 -- # echo 0 00:03:30.561 07:22:34 -- setup/common.sh@33 -- # return 0 00:03:30.561 07:22:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.561 07:22:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.561 07:22:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.561 07:22:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.561 07:22:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.561 07:22:34 -- setup/common.sh@18 -- # local node=1 00:03:30.561 07:22:34 -- setup/common.sh@19 -- # local var val 00:03:30.561 07:22:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.561 07:22:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:03:30.561 07:22:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.561 07:22:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.561 07:22:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.561 07:22:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682304 kB' 'MemFree: 52869964 kB' 'MemUsed: 7812340 kB' 'SwapCached: 0 kB' 'Active: 5176932 kB' 'Inactive: 190048 kB' 'Active(anon): 4984148 kB' 'Inactive(anon): 0 kB' 'Active(file): 192784 kB' 'Inactive(file): 190048 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5012324 kB' 'Mapped: 36172 kB' 'AnonPages: 354808 kB' 'Shmem: 4629492 kB' 'KernelStack: 8680 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122896 kB' 'Slab: 522896 kB' 'SReclaimable: 122896 kB' 'SUnreclaim: 400000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 
07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.561 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.561 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- 
setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 
07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 
07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # continue 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.562 07:22:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.562 07:22:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.562 07:22:34 -- setup/common.sh@33 -- # echo 0 00:03:30.562 07:22:34 -- setup/common.sh@33 -- # return 0 00:03:30.562 07:22:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.562 07:22:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.562 07:22:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.562 07:22:34 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:30.562 07:22:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.562 node0=512 expecting 512 00:03:30.562 07:22:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.562 07:22:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.562 07:22:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.562 07:22:34 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:30.562 node1=512 expecting 512 00:03:30.562 07:22:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:30.562 00:03:30.562 real 0m2.922s 00:03:30.562 user 0m1.226s 00:03:30.562 sys 0m1.768s 00:03:30.562 07:22:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.562 07:22:34 -- common/autotest_common.sh@10 -- # set +x 00:03:30.562 ************************************ 00:03:30.562 END TEST even_2G_alloc 00:03:30.562 ************************************ 00:03:30.562 07:22:34 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:30.562 07:22:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.562 07:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.562 07:22:34 -- common/autotest_common.sh@10 -- # set +x 00:03:30.562 ************************************ 00:03:30.562 START TEST odd_alloc 00:03:30.562 ************************************ 00:03:30.562 07:22:34 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:30.562 07:22:34 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:30.562 07:22:34 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:30.562 07:22:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.562 07:22:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.562 07:22:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:30.562 07:22:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.562 07:22:34 -- setup/hugepages.sh@62 -- # user_nodes=() 
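The `odd_alloc` setup that the trace walks through here asks for 1025 hugepages (`HUGEMEM=2049`) spread over `_no_nodes=2`, and ends up with 513 pages on node0 and 512 on node1. A minimal sketch of that even-split-with-remainder logic — an illustrative helper, not the actual `get_test_nr_hugepages_per_node` from `setup/hugepages.sh` — could look like:

```shell
#!/usr/bin/env bash
# Hedged sketch: split a hugepage count evenly across NUMA nodes,
# handing the remainder to the lowest-numbered node(s), matching the
# 1025 -> 513 + 512 split visible in the trace above.
split_hugepages() {
    local total=$1 nodes=$2
    local base=$((total / nodes)) rem=$((total % nodes))
    local -a per_node
    for ((n = 0; n < nodes; n++)); do
        # the first $rem nodes each absorb one extra page
        per_node[n]=$((base + (n < rem ? 1 : 0)))
    done
    echo "${per_node[@]}"
}

split_hugepages 1025 2   # -> 513 512
```

With an odd total the two nodes can never be exactly equal, which is what the test relies on to distinguish `odd_alloc` from the `even_2G_alloc` case above (512/512).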
00:03:30.562 07:22:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.562 07:22:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:30.562 07:22:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.562 07:22:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.562 07:22:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.563 07:22:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.563 07:22:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.563 07:22:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.563 07:22:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:30.563 07:22:34 -- setup/hugepages.sh@83 -- # : 513 00:03:30.563 07:22:34 -- setup/hugepages.sh@84 -- # : 1 00:03:30.563 07:22:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.563 07:22:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:30.563 07:22:34 -- setup/hugepages.sh@83 -- # : 0 00:03:30.563 07:22:34 -- setup/hugepages.sh@84 -- # : 0 00:03:30.563 07:22:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.563 07:22:34 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:30.563 07:22:34 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:30.563 07:22:34 -- setup/hugepages.sh@160 -- # setup output 00:03:30.563 07:22:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.563 07:22:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.101 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.101 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 
00:03:33.101 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.101 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.101 07:22:36 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:33.101 07:22:36 -- setup/hugepages.sh@89 -- # local node 00:03:33.101 07:22:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.101 07:22:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.101 07:22:36 -- setup/hugepages.sh@92 -- # local surp 00:03:33.101 07:22:36 -- setup/hugepages.sh@93 -- # local resv 00:03:33.101 07:22:36 -- setup/hugepages.sh@94 -- # local anon 00:03:33.101 07:22:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.101 07:22:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.101 07:22:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.101 07:22:36 -- setup/common.sh@18 -- # local node= 00:03:33.101 07:22:36 -- setup/common.sh@19 -- # local var val 00:03:33.101 07:22:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.101 07:22:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.101 07:22:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.101 07:22:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.101 07:22:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.101 07:22:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71568848 kB' 'MemAvailable: 75413144 kB' 'Buffers: 4136 kB' 'Cached: 16082576 kB' 'SwapCached: 0 kB' 'Active: 12767584 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318140 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540204 kB' 'Mapped: 161156 kB' 'Shmem: 11781220 kB' 'KReclaimable: 269964 kB' 'Slab: 1153020 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883056 kB' 'KernelStack: 19632 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995452 kB' 'Committed_AS: 13629320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215340 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.101 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.101 07:22:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- 
setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 
00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.102 07:22:36 -- setup/common.sh@33 -- # echo 0 00:03:33.102 07:22:36 -- setup/common.sh@33 -- # return 0 00:03:33.102 07:22:36 -- setup/hugepages.sh@97 -- # anon=0 00:03:33.102 07:22:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.102 07:22:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.102 07:22:36 -- setup/common.sh@18 -- # local node= 00:03:33.102 07:22:36 -- setup/common.sh@19 -- # local var val 00:03:33.102 07:22:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.102 07:22:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.102 07:22:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.102 07:22:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.102 07:22:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.102 07:22:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.102 07:22:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71569352 kB' 'MemAvailable: 75413648 kB' 'Buffers: 4136 kB' 'Cached: 16082580 kB' 'SwapCached: 0 kB' 'Active: 12767792 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318348 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540452 kB' 'Mapped: 161024 kB' 'Shmem: 11781224 kB' 'KReclaimable: 269964 kB' 'Slab: 1152996 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883032 kB' 'KernelStack: 19632 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995452 kB' 'Committed_AS: 13629332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215324 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.102 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.102 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
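The long runs of `[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` in this trace are `get_meminfo` scanning `/proc/meminfo` one field at a time under `IFS=': '` until the requested key matches, then echoing its value. A condensed, self-contained sketch of that pattern (a hypothetical `get_meminfo_field` helper, not the real `setup/common.sh` function, which also handles per-node `meminfo` files) would be:

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo loop traced above: split each
# meminfo line on ':' and spaces, skip non-matching fields, and
# print the numeric value of the requested key.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" seen in the trace
        echo "$val"                        # the "echo 0" / "return 0" step
        return 0
    done < "$file"
    return 1
}
```

On the system captured in this log, `get_meminfo_field HugePages_Surp` would print `0`, which is exactly the `echo 0` the trace shows once the matching line is reached.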
00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # 
continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 
07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.103 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.103 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.103 07:22:36 -- setup/common.sh@33 -- # echo 0 00:03:33.103 07:22:36 -- setup/common.sh@33 -- # return 0 00:03:33.103 07:22:36 -- setup/hugepages.sh@99 -- # surp=0 00:03:33.103 07:22:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.103 07:22:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.103 07:22:36 -- setup/common.sh@18 -- # local 
node= 00:03:33.103 07:22:36 -- setup/common.sh@19 -- # local var val 00:03:33.104 07:22:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.104 07:22:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.104 07:22:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.104 07:22:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.104 07:22:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.104 07:22:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71569352 kB' 'MemAvailable: 75413648 kB' 'Buffers: 4136 kB' 'Cached: 16082592 kB' 'SwapCached: 0 kB' 'Active: 12767816 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318372 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540424 kB' 'Mapped: 161024 kB' 'Shmem: 11781236 kB' 'KReclaimable: 269964 kB' 'Slab: 1152996 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883032 kB' 'KernelStack: 19616 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995452 kB' 'Committed_AS: 13629348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215324 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:33.104 07:22:36 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 
07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # 
continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.104 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.104 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 
07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.105 07:22:36 -- setup/common.sh@33 -- # echo 0 00:03:33.105 07:22:36 -- setup/common.sh@33 -- # return 0 00:03:33.105 07:22:36 -- setup/hugepages.sh@100 -- # resv=0 00:03:33.105 07:22:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:33.105 nr_hugepages=1025 00:03:33.105 07:22:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.105 resv_hugepages=0 00:03:33.105 07:22:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.105 surplus_hugepages=0 00:03:33.105 07:22:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.105 anon_hugepages=0 00:03:33.105 07:22:36 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:33.105 07:22:36 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:33.105 07:22:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.105 07:22:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.105 07:22:36 -- setup/common.sh@18 -- # local node= 00:03:33.105 07:22:36 -- setup/common.sh@19 -- # local var val 00:03:33.105 07:22:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.105 07:22:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.105 07:22:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.105 07:22:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.105 07:22:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.105 07:22:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71569352 kB' 'MemAvailable: 75413648 kB' 'Buffers: 4136 kB' 'Cached: 16082592 kB' 'SwapCached: 0 kB' 
'Active: 12767816 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318372 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540424 kB' 'Mapped: 161024 kB' 'Shmem: 11781236 kB' 'KReclaimable: 269964 kB' 'Slab: 1152996 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883032 kB' 'KernelStack: 19616 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53995452 kB' 'Committed_AS: 13629364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215340 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 
07:22:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.105 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.105 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # continue 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.106 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.106 07:22:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.106 07:22:36 -- setup/common.sh@33 -- # echo 1025 00:03:33.106 07:22:36 -- setup/common.sh@33 -- # return 0 00:03:33.106 07:22:36 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:33.106 07:22:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.106 07:22:36 -- setup/hugepages.sh@27 -- # local node 00:03:33.106 07:22:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.107 07:22:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:33.107 07:22:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.107 07:22:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:33.107 07:22:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.107 07:22:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.107 07:22:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.107 07:22:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.107 07:22:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.107 07:22:36 
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.107 07:22:36 -- setup/common.sh@18 -- # local node=0 00:03:33.107 07:22:36 -- setup/common.sh@19 -- # local var val 00:03:33.107 07:22:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.107 07:22:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.107 07:22:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.107 07:22:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.107 07:22:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.107 07:22:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.107 07:22:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 18688844 kB' 'MemUsed: 13941752 kB' 'SwapCached: 0 kB' 'Active: 7592344 kB' 'Inactive: 3666000 kB' 'Active(anon): 7335684 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074400 kB' 'Mapped: 124852 kB' 'AnonPages: 187076 kB' 'Shmem: 7151740 kB' 'KernelStack: 10920 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 630132 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 483064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- 
setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 
00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.107 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.107 07:22:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@33 -- # echo 0 00:03:33.108 07:22:37 -- 
setup/common.sh@33 -- # return 0 00:03:33.108 07:22:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.108 07:22:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.108 07:22:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.108 07:22:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:33.108 07:22:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.108 07:22:37 -- setup/common.sh@18 -- # local node=1 00:03:33.108 07:22:37 -- setup/common.sh@19 -- # local var val 00:03:33.108 07:22:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.108 07:22:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.108 07:22:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:33.108 07:22:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:33.108 07:22:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.108 07:22:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682304 kB' 'MemFree: 52880780 kB' 'MemUsed: 7801524 kB' 'SwapCached: 0 kB' 'Active: 5175508 kB' 'Inactive: 190048 kB' 'Active(anon): 4982724 kB' 'Inactive(anon): 0 kB' 'Active(file): 192784 kB' 'Inactive(file): 190048 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5012356 kB' 'Mapped: 36172 kB' 'AnonPages: 353376 kB' 'Shmem: 4629524 kB' 'KernelStack: 8712 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122896 kB' 'Slab: 522864 kB' 'SReclaimable: 122896 kB' 'SUnreclaim: 399968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 
'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 
00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.108 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.108 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:33.109 07:22:37 -- setup/common.sh@32 -- # continue 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.109 07:22:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.109 07:22:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.109 07:22:37 -- setup/common.sh@33 -- # echo 0 00:03:33.109 07:22:37 -- setup/common.sh@33 -- # return 0 00:03:33.109 07:22:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.109 07:22:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.109 07:22:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.109 07:22:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.109 07:22:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:33.109 node0=512 expecting 513 00:03:33.109 07:22:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.109 07:22:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.109 07:22:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.109 07:22:37 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:33.109 node1=513 expecting 512 00:03:33.109 07:22:37 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:33.109 00:03:33.109 real 0m2.903s 00:03:33.109 user 0m1.214s 00:03:33.109 sys 0m1.757s 00:03:33.109 07:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.109 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:03:33.109 ************************************ 00:03:33.109 END TEST odd_alloc 00:03:33.109 ************************************ 00:03:33.368 07:22:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:33.368 07:22:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.368 07:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.368 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:03:33.368 
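The odd_alloc trace above spends most of its lines inside a `get_meminfo`-style helper: pick a meminfo file (per-node when a node is given), then scan it field by field until the requested key matches. A minimal standalone sketch of that same parsing pattern — not the SPDK helper itself, just an illustration assuming an ordinary Linux `/proc/meminfo` — could look like:

```shell
#!/usr/bin/env bash
shopt -s extglob

# Sketch of the get_meminfo pattern exercised in the trace above (hypothetical
# reimplementation, not SPDK's setup/common.sh): print the value of one
# meminfo field, preferring the per-node file when a node number is given.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
        # key comparison works for both file formats (extglob pattern).
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo MemTotal        # system-wide value from /proc/meminfo
get_meminfo HugePages_Total # the field the @32/@33 loop above is hunting for
```

The trace's `[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue` lines are exactly this scan: every non-matching field takes the `continue` branch until `HugePages_Total` (or `HugePages_Surp`) hits and the value is echoed back.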
************************************ 00:03:33.368 START TEST custom_alloc 00:03:33.368 ************************************ 00:03:33.368 07:22:37 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:33.368 07:22:37 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:33.368 07:22:37 -- setup/hugepages.sh@169 -- # local node 00:03:33.368 07:22:37 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:33.368 07:22:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:33.368 07:22:37 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:33.368 07:22:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:33.368 07:22:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:33.368 07:22:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.368 07:22:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:33.368 07:22:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.368 07:22:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:33.368 07:22:37 -- setup/hugepages.sh@83 -- # : 256 00:03:33.368 07:22:37 -- setup/hugepages.sh@84 -- # : 1 00:03:33.368 07:22:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:33.368 07:22:37 -- 
setup/hugepages.sh@83 -- # : 0 00:03:33.368 07:22:37 -- setup/hugepages.sh@84 -- # : 0 00:03:33.368 07:22:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:33.368 07:22:37 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:33.368 07:22:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:33.368 07:22:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:33.368 07:22:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.368 07:22:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.368 07:22:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.368 07:22:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:33.368 07:22:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:33.368 07:22:37 -- setup/hugepages.sh@78 -- # return 0 00:03:33.368 07:22:37 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:33.368 07:22:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:33.368 07:22:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:33.368 07:22:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:33.368 07:22:37 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:33.368 07:22:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.368 07:22:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.368 07:22:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.368 07:22:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.368 07:22:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:33.368 07:22:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:33.368 07:22:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:33.368 07:22:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:33.369 07:22:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:33.369 07:22:37 -- setup/hugepages.sh@78 -- # return 0 00:03:33.369 07:22:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:33.369 07:22:37 -- setup/hugepages.sh@187 -- # setup output 00:03:33.369 07:22:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.369 07:22:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.905 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.905 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:00:04.2 (8086 2021): 
Already using the vfio-pci driver 00:03:35.905 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.905 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.905 07:22:39 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:35.905 07:22:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:35.905 07:22:39 -- setup/hugepages.sh@89 -- # local node 00:03:35.905 07:22:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.905 07:22:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.905 07:22:39 -- setup/hugepages.sh@92 -- # local surp 00:03:35.905 07:22:39 -- setup/hugepages.sh@93 -- # local resv 00:03:35.905 07:22:39 -- setup/hugepages.sh@94 -- # local anon 00:03:35.905 07:22:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.905 07:22:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.905 07:22:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.905 07:22:39 -- setup/common.sh@18 -- # local node= 00:03:35.905 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:35.905 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.905 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.905 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.905 07:22:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.905 07:22:39 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:35.905 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.905 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.905 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 70541596 kB' 'MemAvailable: 74385892 kB' 'Buffers: 4136 kB' 'Cached: 16082704 kB' 'SwapCached: 0 kB' 'Active: 12768684 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319240 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541196 kB' 'Mapped: 161052 kB' 'Shmem: 11781348 kB' 'KReclaimable: 269964 kB' 'Slab: 1152952 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882988 kB' 'KernelStack: 19648 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472188 kB' 'Committed_AS: 13629964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215388 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- 
# continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 
-- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.906 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.906 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.907 07:22:39 -- setup/common.sh@33 -- # echo 0 00:03:35.907 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:35.907 07:22:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.907 07:22:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.907 07:22:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.907 07:22:39 -- setup/common.sh@18 -- # local node= 00:03:35.907 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:35.907 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.907 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.907 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.907 07:22:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.907 07:22:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.907 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 70541956 kB' 'MemAvailable: 74386252 kB' 
'Buffers: 4136 kB' 'Cached: 16082708 kB' 'SwapCached: 0 kB' 'Active: 12768468 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319024 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540932 kB' 'Mapped: 161032 kB' 'Shmem: 11781352 kB' 'KReclaimable: 269964 kB' 'Slab: 1152916 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 882952 kB' 'KernelStack: 19632 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472188 kB' 'Committed_AS: 13629976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215356 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.907 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.907 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 
-- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- 
setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.908 07:22:39 -- setup/common.sh@33 -- # echo 0 00:03:35.908 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:35.908 07:22:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.908 07:22:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.908 07:22:39 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:35.908 07:22:39 -- setup/common.sh@18 -- # local node= 00:03:35.908 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:35.908 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.908 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.908 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.908 07:22:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.908 07:22:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.908 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 70542176 kB' 'MemAvailable: 74386472 kB' 'Buffers: 4136 kB' 'Cached: 16082720 kB' 'SwapCached: 0 kB' 'Active: 12768456 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319012 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540900 kB' 'Mapped: 161032 kB' 'Shmem: 11781364 kB' 'KReclaimable: 269964 kB' 'Slab: 1152980 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883016 kB' 'KernelStack: 19632 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472188 kB' 'Committed_AS: 13629992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215372 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3269588 kB' 
'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.908 
07:22:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.908 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.908 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- 
setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- 
setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@32 -- 
# continue 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.909 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.909 07:22:39 -- setup/common.sh@33 -- # echo 0 00:03:35.909 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:35.909 07:22:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.909 07:22:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:35.909 nr_hugepages=1536 00:03:35.909 07:22:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.909 resv_hugepages=0 00:03:35.909 07:22:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.909 surplus_hugepages=0 00:03:35.909 07:22:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.909 anon_hugepages=0 00:03:35.909 07:22:39 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.909 07:22:39 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:35.909 07:22:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.909 07:22:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.909 07:22:39 -- setup/common.sh@18 -- # local node= 00:03:35.909 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:35.909 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.909 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.909 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.909 07:22:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.909 07:22:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.909 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.909 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 70542604 kB' 'MemAvailable: 74386900 kB' 
'Buffers: 4136 kB' 'Cached: 16082732 kB' 'SwapCached: 0 kB' 'Active: 12768456 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319012 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540900 kB' 'Mapped: 161032 kB' 'Shmem: 11781376 kB' 'KReclaimable: 269964 kB' 'Slab: 1152980 kB' 'SReclaimable: 269964 kB' 'SUnreclaim: 883016 kB' 'KernelStack: 19632 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53472188 kB' 'Committed_AS: 13630008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215372 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 07:22:39 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # continue 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.910 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.910 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.170 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.170 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.171 07:22:39 -- setup/common.sh@33 -- # echo 1536 00:03:36.171 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:36.171 07:22:39 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.171 07:22:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.171 07:22:39 -- setup/hugepages.sh@27 -- # local node 00:03:36.171 07:22:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.171 07:22:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.171 07:22:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.171 07:22:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.171 07:22:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.171 07:22:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.171 07:22:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.171 07:22:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.171 07:22:39 -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:36.171 07:22:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.171 07:22:39 -- setup/common.sh@18 -- # local node=0 00:03:36.171 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:36.171 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.171 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.171 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.171 07:22:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.171 07:22:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.171 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.171 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.171 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 18682488 kB' 'MemUsed: 13948108 kB' 'SwapCached: 0 kB' 'Active: 7593112 kB' 'Inactive: 3666000 kB' 'Active(anon): 7336452 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074484 kB' 'Mapped: 124860 kB' 'AnonPages: 187780 kB' 'Shmem: 7151824 kB' 'KernelStack: 10952 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 630228 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 483160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.171 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 
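The trace above shows `setup/common.sh`'s `get_meminfo` selecting the per-node file `/sys/devices/system/node/node0/meminfo`, stripping the `Node N ` prefix from each line, and scanning key/value pairs with `IFS=': ' read`. A minimal standalone sketch of that parsing technique (a hypothetical re-creation, not the original script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the get_meminfo technique visible in the trace:
# read a meminfo-style file, strip the "Node N " prefix used by the
# per-node files under /sys/devices/system/node/nodeN/meminfo, then
# split each line on ': ' and print the value for the requested key.
shopt -s extglob  # required for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val rest
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; drop it so the
    # same loop handles both /proc/meminfo and the per-node variants.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
```

Usage would mirror the trace, e.g. `get_meminfo_sketch HugePages_Surp /sys/devices/system/node/node0/meminfo` on a NUMA machine, or `get_meminfo_sketch MemTotal /proc/meminfo` for the global view.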
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- 
setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.172 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.172 07:22:39 -- 
setup/common.sh@33 -- # echo 0 00:03:36.172 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:36.172 07:22:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.172 07:22:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.172 07:22:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.172 07:22:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.172 07:22:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.172 07:22:39 -- setup/common.sh@18 -- # local node=1 00:03:36.172 07:22:39 -- setup/common.sh@19 -- # local var val 00:03:36.172 07:22:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.172 07:22:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.172 07:22:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.172 07:22:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.172 07:22:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.172 07:22:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.172 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682304 kB' 'MemFree: 51859864 kB' 'MemUsed: 8822440 kB' 'SwapCached: 0 kB' 'Active: 5175544 kB' 'Inactive: 190048 kB' 'Active(anon): 4982760 kB' 'Inactive(anon): 0 kB' 'Active(file): 192784 kB' 'Inactive(file): 190048 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5012396 kB' 'Mapped: 36172 kB' 'AnonPages: 353284 kB' 'Shmem: 4629564 kB' 'KernelStack: 8680 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122896 kB' 'Slab: 522744 kB' 'SReclaimable: 122896 kB' 'SUnreclaim: 399848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # continue 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.173 07:22:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.173 07:22:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.173 07:22:39 -- setup/common.sh@33 -- # echo 0 00:03:36.173 07:22:39 -- setup/common.sh@33 -- # return 0 00:03:36.173 07:22:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.173 07:22:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.173 07:22:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.173 07:22:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.173 07:22:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.173 node0=512 expecting 512 00:03:36.174 07:22:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.174 07:22:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.174 07:22:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.174 07:22:39 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:36.174 node1=1024 expecting 1024 00:03:36.174 07:22:39 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:36.174 00:03:36.174 real 0m2.848s 00:03:36.174 user 0m1.181s 00:03:36.174 sys 0m1.725s 00:03:36.174 07:22:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.174 07:22:39 -- common/autotest_common.sh@10 -- # set +x 00:03:36.174 ************************************ 00:03:36.174 END TEST custom_alloc 00:03:36.174 ************************************ 00:03:36.174 07:22:39 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.174 07:22:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:36.174 07:22:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:36.174 07:22:39 
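The custom_alloc test that just finished checks that the per-node hugepage counts (`node0=512 expecting 512`, `node1=1024 expecting 1024`) sum to the global total. A hypothetical sketch of that per-node summation (the real logic lives in `setup/hugepages.sh`; the base path is parameterized here so the loop can be exercised against any directory layout):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: sum HugePages_Total across the per-NUMA-node
# meminfo files, as the test does before comparing the result against
# its per-node expectations (512 + 1024 = 1536 in this run).
sum_node_hugepages() {
    local base=${1:-/sys/devices/system/node}
    local total=0 f junk node var val rest
    for f in "$base"/node[0-9]*/meminfo; do
        [[ -e $f ]] || continue   # skip if the glob matched nothing
        # Per-node lines look like: "Node 0 HugePages_Total:   512"
        while IFS=': ' read -r junk node var val rest; do
            if [[ $var == HugePages_Total ]]; then
                total=$(( total + val ))
            fi
        done < "$f"
    done
    echo "$total"
}
```

On the machine in this log, `sum_node_hugepages` over the real sysfs tree would yield 1536, matching the `(( 1536 == nr_hugepages + surp + resv ))` check seen earlier in the trace.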
-- common/autotest_common.sh@10 -- # set +x 00:03:36.174 ************************************ 00:03:36.174 START TEST no_shrink_alloc 00:03:36.174 ************************************ 00:03:36.174 07:22:39 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:36.174 07:22:39 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.174 07:22:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.174 07:22:39 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.174 07:22:39 -- setup/hugepages.sh@51 -- # shift 00:03:36.174 07:22:39 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.174 07:22:39 -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.174 07:22:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.174 07:22:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.174 07:22:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.174 07:22:39 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.174 07:22:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.174 07:22:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.174 07:22:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.174 07:22:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.174 07:22:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.174 07:22:39 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.174 07:22:39 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.174 07:22:39 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.174 07:22:39 -- setup/hugepages.sh@73 -- # return 0 00:03:36.174 07:22:39 -- setup/hugepages.sh@198 -- # setup output 00:03:36.174 07:22:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.174 07:22:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.708 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:5e:00.0 (8086 0a54): Already 
using the vfio-pci driver 00:03:38.708 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.708 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.708 07:22:42 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:38.708 07:22:42 -- setup/hugepages.sh@89 -- # local node 00:03:38.708 07:22:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.708 07:22:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.708 07:22:42 -- setup/hugepages.sh@92 -- # local surp 00:03:38.708 07:22:42 -- setup/hugepages.sh@93 -- # local resv 00:03:38.708 07:22:42 -- setup/hugepages.sh@94 -- # local anon 00:03:38.708 07:22:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.708 07:22:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.708 07:22:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.708 07:22:42 -- setup/common.sh@18 -- # local node= 00:03:38.708 07:22:42 -- setup/common.sh@19 -- # local var val 00:03:38.709 07:22:42 -- setup/common.sh@20 
-- # local mem_f mem 00:03:38.709 07:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.709 07:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.709 07:22:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.709 07:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.709 07:22:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71589696 kB' 'MemAvailable: 75433944 kB' 'Buffers: 4136 kB' 'Cached: 16082812 kB' 'SwapCached: 0 kB' 'Active: 12768428 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318984 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540248 kB' 'Mapped: 161172 kB' 'Shmem: 11781456 kB' 'KReclaimable: 269868 kB' 'Slab: 1152480 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882612 kB' 'KernelStack: 19680 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13630460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215324 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # 
continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.709 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.709 07:22:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.710 07:22:42 -- setup/common.sh@33 -- # echo 0 00:03:38.710 07:22:42 -- setup/common.sh@33 -- # return 0 00:03:38.710 07:22:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.710 07:22:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.710 07:22:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.710 07:22:42 -- setup/common.sh@18 -- # local node= 00:03:38.710 07:22:42 -- setup/common.sh@19 -- # local var val 00:03:38.710 07:22:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.710 07:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.710 07:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.710 07:22:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.710 07:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.710 07:22:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71589700 kB' 'MemAvailable: 75433948 kB' 'Buffers: 4136 kB' 'Cached: 16082816 kB' 'SwapCached: 0 kB' 'Active: 12768420 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318976 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540288 kB' 'Mapped: 161116 kB' 'Shmem: 11781460 kB' 'KReclaimable: 269868 kB' 'Slab: 1152480 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882612 kB' 'KernelStack: 19632 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13630472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215308 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.710 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.710 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 
-- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- 
setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.711 07:22:42 -- setup/common.sh@33 
-- # echo 0 00:03:38.711 07:22:42 -- setup/common.sh@33 -- # return 0 00:03:38.711 07:22:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.711 07:22:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.711 07:22:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.711 07:22:42 -- setup/common.sh@18 -- # local node= 00:03:38.711 07:22:42 -- setup/common.sh@19 -- # local var val 00:03:38.711 07:22:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.711 07:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.711 07:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.711 07:22:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.711 07:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.711 07:22:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71589328 kB' 'MemAvailable: 75433576 kB' 'Buffers: 4136 kB' 'Cached: 16082828 kB' 'SwapCached: 0 kB' 'Active: 12767944 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318500 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540272 kB' 'Mapped: 161040 kB' 'Shmem: 11781472 kB' 'KReclaimable: 269868 kB' 'Slab: 1152428 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882560 kB' 'KernelStack: 19632 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13630488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215308 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 
07:22:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.711 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.711 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- 
setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.712 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.712 07:22:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.712 07:22:42 -- 
setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- 
# continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.973 07:22:42 -- setup/common.sh@33 -- # echo 0 00:03:38.973 07:22:42 -- setup/common.sh@33 -- # return 0 00:03:38.973 07:22:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.973 07:22:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.973 nr_hugepages=1024 00:03:38.973 07:22:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.973 resv_hugepages=0 00:03:38.973 07:22:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.973 surplus_hugepages=0 00:03:38.973 07:22:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.973 anon_hugepages=0 00:03:38.973 07:22:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.973 07:22:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.973 07:22:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.973 07:22:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.973 07:22:42 -- setup/common.sh@18 -- # local node= 00:03:38.973 07:22:42 -- setup/common.sh@19 -- # local var val 00:03:38.973 07:22:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.973 07:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.973 07:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.973 07:22:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.973 07:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.973 07:22:42 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71589980 kB' 'MemAvailable: 75434228 kB' 'Buffers: 4136 kB' 'Cached: 16082840 kB' 'SwapCached: 0 kB' 'Active: 12767948 kB' 'Inactive: 3856048 kB' 'Active(anon): 12318504 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540268 kB' 'Mapped: 161040 kB' 'Shmem: 11781484 kB' 'KReclaimable: 269868 kB' 'Slab: 1152428 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882560 kB' 'KernelStack: 19632 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13630504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215308 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.973 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.973 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 
07:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.974 07:22:42 -- setup/common.sh@33 -- # echo 1024 00:03:38.974 07:22:42 -- setup/common.sh@33 -- # return 0 00:03:38.974 07:22:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.974 07:22:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.974 07:22:42 -- setup/hugepages.sh@27 -- # local node 00:03:38.974 07:22:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.974 07:22:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.974 07:22:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.974 07:22:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.974 07:22:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.974 07:22:42 
-- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.974 07:22:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.974 07:22:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.974 07:22:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.974 07:22:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.974 07:22:42 -- setup/common.sh@18 -- # local node=0 00:03:38.974 07:22:42 -- setup/common.sh@19 -- # local var val 00:03:38.974 07:22:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.974 07:22:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.974 07:22:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.974 07:22:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.974 07:22:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.974 07:22:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 17646868 kB' 'MemUsed: 14983728 kB' 'SwapCached: 0 kB' 'Active: 7592820 kB' 'Inactive: 3666000 kB' 'Active(anon): 7336160 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074580 kB' 'Mapped: 125372 kB' 'AnonPages: 187868 kB' 'Shmem: 7151920 kB' 'KernelStack: 10952 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 629836 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 482768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.974 07:22:42 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.974 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.974 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- 
setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 
00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # continue 
00:03:38.975 07:22:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.975 07:22:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.975 07:22:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.975 07:22:42 -- setup/common.sh@33 -- # echo 0 00:03:38.975 07:22:42 -- setup/common.sh@33 -- # return 0 00:03:38.975 07:22:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.975 07:22:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.975 07:22:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.975 07:22:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.975 07:22:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.975 node0=1024 expecting 1024 00:03:38.975 07:22:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.975 07:22:42 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:38.975 07:22:42 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:38.975 07:22:42 -- setup/hugepages.sh@202 -- # setup output 00:03:38.975 07:22:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.975 07:22:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.514 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.514 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.514 
0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.514 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.775 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:41.775 07:22:45 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:41.775 07:22:45 -- setup/hugepages.sh@89 -- # local node 00:03:41.775 07:22:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.775 07:22:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.775 07:22:45 -- setup/hugepages.sh@92 -- # local surp 00:03:41.775 07:22:45 -- setup/hugepages.sh@93 -- # local resv 00:03:41.775 07:22:45 -- setup/hugepages.sh@94 -- # local anon 00:03:41.775 07:22:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.775 07:22:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.775 07:22:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.775 07:22:45 -- setup/common.sh@18 -- # local node= 00:03:41.775 07:22:45 -- setup/common.sh@19 -- # local var val 00:03:41.775 07:22:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.775 07:22:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.775 07:22:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.775 07:22:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.775 07:22:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.775 07:22:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.775 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.775 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71619992 kB' 'MemAvailable: 75464240 kB' 'Buffers: 4136 kB' 'Cached: 16082924 kB' 'SwapCached: 0 kB' 'Active: 12770012 kB' 'Inactive: 3856048 kB' 'Active(anon): 12320568 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541824 kB' 'Mapped: 161252 kB' 'Shmem: 11781568 kB' 'KReclaimable: 269868 kB' 'Slab: 1152288 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882420 kB' 'KernelStack: 19648 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13631084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215292 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- 
setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 
07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 07:22:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.777 07:22:45 -- setup/common.sh@33 -- # echo 0 00:03:41.777 07:22:45 -- setup/common.sh@33 -- # return 0 00:03:41.777 07:22:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:41.777 07:22:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.777 07:22:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.777 07:22:45 -- setup/common.sh@18 -- # local node= 00:03:41.777 07:22:45 -- setup/common.sh@19 -- # local var val 00:03:41.777 07:22:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.777 07:22:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.777 07:22:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.777 07:22:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.777 07:22:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.777 07:22:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71619740 kB' 'MemAvailable: 75463988 kB' 'Buffers: 4136 kB' 'Cached: 16082928 kB' 'SwapCached: 0 kB' 'Active: 12770320 kB' 'Inactive: 3856048 kB' 'Active(anon): 12320876 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542188 kB' 'Mapped: 161252 kB' 'Shmem: 11781572 kB' 'KReclaimable: 269868 kB' 'Slab: 1152288 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882420 kB' 'KernelStack: 19648 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13631096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215292 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.777 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.777 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.778 07:22:45 -- setup/common.sh@33 -- # echo 0 00:03:41.778 07:22:45 -- setup/common.sh@33 -- # return 0 00:03:41.778 07:22:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.778 07:22:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.778 07:22:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.778 07:22:45 -- setup/common.sh@18 -- # local node= 00:03:41.778 07:22:45 -- setup/common.sh@19 -- # local var val 00:03:41.778 07:22:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.778 07:22:45 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.778 07:22:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.778 07:22:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.778 07:22:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.778 07:22:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71620448 kB' 'MemAvailable: 75464696 kB' 'Buffers: 4136 kB' 'Cached: 16082940 kB' 'SwapCached: 0 kB' 'Active: 12769128 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319684 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541388 kB' 'Mapped: 161044 kB' 'Shmem: 11781584 kB' 'KReclaimable: 269868 kB' 'Slab: 1152376 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882508 kB' 'KernelStack: 19632 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13631112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215308 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 
07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.778 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.778 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.779 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.779 07:22:45 -- setup/common.sh@33 -- # echo 0 00:03:41.779 07:22:45 -- setup/common.sh@33 -- # return 0 00:03:41.779 07:22:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:41.779 07:22:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.779 nr_hugepages=1024 00:03:41.779 07:22:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.779 resv_hugepages=0 00:03:41.779 07:22:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.779 surplus_hugepages=0 00:03:41.779 07:22:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.779 anon_hugepages=0 00:03:41.779 07:22:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.779 07:22:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.779 07:22:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.779 07:22:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.779 07:22:45 -- setup/common.sh@18 -- # local node= 00:03:41.779 07:22:45 -- setup/common.sh@19 -- # local var val 00:03:41.779 07:22:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.779 07:22:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.779 07:22:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.779 07:22:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.779 07:22:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.779 07:22:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.779 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93312900 kB' 'MemFree: 71620412 kB' 'MemAvailable: 75464660 kB' 'Buffers: 4136 kB' 'Cached: 16082952 kB' 'SwapCached: 0 kB' 'Active: 12769148 kB' 'Inactive: 3856048 kB' 'Active(anon): 12319704 kB' 'Inactive(anon): 0 kB' 'Active(file): 449444 kB' 'Inactive(file): 3856048 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541380 kB' 'Mapped: 161044 kB' 'Shmem: 11781596 kB' 'KReclaimable: 269868 kB' 'Slab: 1152376 kB' 'SReclaimable: 269868 kB' 'SUnreclaim: 882508 kB' 'KernelStack: 19632 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996476 kB' 'Committed_AS: 13631128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215308 kB' 'VmallocChunk: 0 kB' 'Percpu: 78336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3269588 kB' 'DirectMap2M: 17381376 kB' 'DirectMap1G: 82837504 kB' 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 
00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 
07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.780 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.780 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 
07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 
-- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.781 07:22:45 -- setup/common.sh@33 -- # echo 1024 00:03:41.781 07:22:45 -- setup/common.sh@33 -- # return 0 00:03:41.781 07:22:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.781 07:22:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.781 07:22:45 -- setup/hugepages.sh@27 -- # local node 00:03:41.781 07:22:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.781 07:22:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.781 07:22:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.781 07:22:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.781 07:22:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.781 07:22:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.781 07:22:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.781 07:22:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.781 07:22:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.781 07:22:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.781 07:22:45 -- setup/common.sh@18 -- # local node=0 00:03:41.781 07:22:45 -- setup/common.sh@19 -- # local var 
val 00:03:41.781 07:22:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.781 07:22:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.781 07:22:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.781 07:22:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.781 07:22:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.781 07:22:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32630596 kB' 'MemFree: 17667648 kB' 'MemUsed: 14962948 kB' 'SwapCached: 0 kB' 'Active: 7593976 kB' 'Inactive: 3666000 kB' 'Active(anon): 7337316 kB' 'Inactive(anon): 0 kB' 'Active(file): 256660 kB' 'Inactive(file): 3666000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11074688 kB' 'Mapped: 124872 kB' 'AnonPages: 188560 kB' 'Shmem: 7152028 kB' 'KernelStack: 10984 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147068 kB' 'Slab: 629708 kB' 'SReclaimable: 147068 kB' 'SUnreclaim: 482640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.781 07:22:45 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.781 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.041 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.041 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # continue 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.042 07:22:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.042 07:22:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.042 07:22:45 -- setup/common.sh@33 -- # echo 0 00:03:42.042 07:22:45 -- setup/common.sh@33 -- # return 0 00:03:42.042 07:22:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.042 07:22:45 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:42.042 07:22:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.042 07:22:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.042 07:22:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.042 node0=1024 expecting 1024 00:03:42.042 07:22:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.042 00:03:42.042 real 0m5.794s 00:03:42.042 user 0m2.375s 00:03:42.042 sys 0m3.543s 00:03:42.042 07:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.042 07:22:45 -- common/autotest_common.sh@10 -- # set +x 00:03:42.042 ************************************ 00:03:42.042 END TEST no_shrink_alloc 00:03:42.042 ************************************ 00:03:42.042 07:22:45 -- setup/hugepages.sh@217 -- # clear_hp 00:03:42.042 07:22:45 -- setup/hugepages.sh@37 -- # local node hp 00:03:42.042 07:22:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.042 07:22:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.042 07:22:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.042 07:22:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.042 07:22:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.042 07:22:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.042 07:22:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.042 07:22:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.042 07:22:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.042 07:22:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:42.042 07:22:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.042 07:22:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.042 00:03:42.042 real 0m21.612s 00:03:42.042 user 0m8.606s 00:03:42.042 sys 
0m12.716s 00:03:42.042 07:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.042 07:22:45 -- common/autotest_common.sh@10 -- # set +x 00:03:42.042 ************************************ 00:03:42.042 END TEST hugepages 00:03:42.042 ************************************ 00:03:42.042 07:22:45 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.042 07:22:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.042 07:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.042 07:22:45 -- common/autotest_common.sh@10 -- # set +x 00:03:42.042 ************************************ 00:03:42.042 START TEST driver 00:03:42.042 ************************************ 00:03:42.042 07:22:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:42.042 * Looking for test storage... 00:03:42.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.043 07:22:45 -- setup/driver.sh@68 -- # setup reset 00:03:42.043 07:22:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.043 07:22:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.247 07:22:49 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:46.247 07:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:46.247 07:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.247 07:22:49 -- common/autotest_common.sh@10 -- # set +x 00:03:46.247 ************************************ 00:03:46.247 START TEST guess_driver 00:03:46.247 ************************************ 00:03:46.247 07:22:49 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:46.247 07:22:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:46.247 07:22:49 -- setup/driver.sh@47 -- # local fail=0 00:03:46.247 07:22:49 -- 
setup/driver.sh@49 -- # pick_driver 00:03:46.247 07:22:49 -- setup/driver.sh@36 -- # vfio 00:03:46.247 07:22:49 -- setup/driver.sh@21 -- # local iommu_groups 00:03:46.247 07:22:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:46.247 07:22:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:46.247 07:22:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:46.247 07:22:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:46.247 07:22:49 -- setup/driver.sh@29 -- # (( 171 > 0 )) 00:03:46.247 07:22:49 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:46.247 07:22:49 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:46.247 07:22:49 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:46.247 07:22:49 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:46.247 07:22:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:46.247 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:46.247 07:22:49 -- setup/driver.sh@30 -- # return 0 00:03:46.247 07:22:49 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:46.247 07:22:49 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:46.247 07:22:49 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:46.247 07:22:49 -- setup/driver.sh@56 -- # echo 'Looking for 
driver=vfio-pci' 00:03:46.247 Looking for driver=vfio-pci 00:03:46.247 07:22:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.247 07:22:49 -- setup/driver.sh@45 -- # setup output config 00:03:46.247 07:22:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.247 07:22:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
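The guess_driver trace above reduces to a two-part check: the host must expose IOMMU groups under /sys/kernel/iommu_groups (171 in this run), and `modprobe --show-depends vfio_pci` must resolve to real .ko objects. A minimal sketch of that decision, with the two probed facts passed in as arguments so it runs without /sys access (`pick_driver_sketch` is an illustrative name, not SPDK's function):

```shell
#!/usr/bin/env bash
# Sketch of the vfio-pci selection traced above (hypothetical helper, not
# SPDK's setup/driver.sh). Inputs stand in for the /sys and modprobe probes:
#   $1 - count of /sys/kernel/iommu_groups entries
#   $2 - output of `modprobe --show-depends vfio_pci`
pick_driver_sketch() {
    local iommu_groups=$1 depends=$2
    # vfio-pci is only viable when the IOMMU is active (groups exist) and
    # the module dependency chain resolves to loadable .ko objects.
    if (( iommu_groups > 0 )) && [[ $depends == *.ko* ]]; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

pick_driver_sketch 171 'insmod /lib/modules/x/vfio-pci.ko.xz'   # -> vfio-pci
pick_driver_sketch 0   'insmod /lib/modules/x/vfio-pci.ko.xz'   # -> No valid driver found
```

The trace also reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode (N here), the knob that lets vfio be chosen even without IOMMU groups when unsafe no-IOMMU mode is enabled.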
00:03:48.786 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.786 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.786 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.787 07:22:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.787 07:22:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.787 07:22:52 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.726 07:22:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.726 07:22:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:49.726 07:22:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.726 07:22:53 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.726 07:22:53 -- setup/driver.sh@65 -- # setup reset 00:03:49.726 07:22:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.726 07:22:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.924 00:03:53.924 real 0m7.073s 00:03:53.924 user 0m1.824s 00:03:53.924 sys 0m3.663s 00:03:53.924 07:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.924 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:03:53.924 ************************************ 00:03:53.924 END TEST guess_driver 00:03:53.924 ************************************ 00:03:53.924 00:03:53.924 real 0m11.221s 00:03:53.924 user 0m3.045s 00:03:53.924 sys 0m5.881s 00:03:53.924 07:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.924 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:03:53.924 ************************************ 00:03:53.924 END TEST driver 00:03:53.924 ************************************ 00:03:53.924 07:22:57 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.924 07:22:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.924 07:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.924 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:03:53.924 ************************************ 00:03:53.924 START TEST devices 00:03:53.924 ************************************ 00:03:53.924 07:22:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:53.924 * Looking for test 
storage... 00:03:53.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.924 07:22:57 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.924 07:22:57 -- setup/devices.sh@192 -- # setup reset 00:03:53.924 07:22:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.924 07:22:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.462 07:23:00 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.463 07:23:00 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:56.463 07:23:00 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:56.463 07:23:00 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:56.463 07:23:00 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:56.463 07:23:00 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:56.463 07:23:00 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:56.463 07:23:00 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.463 07:23:00 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:56.463 07:23:00 -- setup/devices.sh@196 -- # blocks=() 00:03:56.463 07:23:00 -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.463 07:23:00 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.463 07:23:00 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.463 07:23:00 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.463 07:23:00 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.463 07:23:00 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.463 07:23:00 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.463 07:23:00 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:56.463 07:23:00 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:56.463 07:23:00 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.463 07:23:00 -- 
scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:56.463 07:23:00 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.463 No valid GPT data, bailing 00:03:56.463 07:23:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.463 07:23:00 -- scripts/common.sh@393 -- # pt= 00:03:56.463 07:23:00 -- scripts/common.sh@394 -- # return 1 00:03:56.463 07:23:00 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.463 07:23:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.463 07:23:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.463 07:23:00 -- setup/common.sh@80 -- # echo 1000204886016 00:03:56.463 07:23:00 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:56.463 07:23:00 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.463 07:23:00 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:56.463 07:23:00 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:56.463 07:23:00 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:56.463 07:23:00 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:56.463 07:23:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:56.463 07:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.463 07:23:00 -- common/autotest_common.sh@10 -- # set +x 00:03:56.463 ************************************ 00:03:56.463 START TEST nvme_mount 00:03:56.463 ************************************ 00:03:56.463 07:23:00 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:56.463 07:23:00 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:56.463 07:23:00 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:56.463 07:23:00 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.463 07:23:00 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.463 07:23:00 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:56.463 07:23:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.463 07:23:00 -- setup/common.sh@40 -- # local part_no=1 00:03:56.463 07:23:00 -- setup/common.sh@41 -- # local size=1073741824 00:03:56.463 07:23:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.463 07:23:00 -- setup/common.sh@44 -- # parts=() 00:03:56.463 07:23:00 -- setup/common.sh@44 -- # local parts 00:03:56.463 07:23:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.463 07:23:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.463 07:23:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.463 07:23:00 -- setup/common.sh@46 -- # (( part++ )) 00:03:56.463 07:23:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.463 07:23:00 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.463 07:23:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.463 07:23:00 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:57.841 Creating new GPT entries in memory. 00:03:57.841 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.841 other utilities. 00:03:57.841 07:23:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.841 07:23:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.841 07:23:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.841 07:23:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.841 07:23:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.778 Creating new GPT entries in memory. 00:03:58.778 The operation has completed successfully. 
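The `sgdisk --new=1:2048:2099199` arguments above fall out of partition_drive's sector arithmetic: the requested 1 GiB is converted to 512-byte sectors, the first partition is aligned at sector 2048, and each later partition would start one sector past the previous end. A standalone sketch of that math (values reproduce the trace):

```shell
#!/usr/bin/env bash
# Sector math behind the `sgdisk --new=1:2048:2099199` call traced above.
size=1073741824                # requested partition size in bytes (1 GiB)
(( size /= 512 ))              # convert to 512-byte sectors -> 2097152
part_start=0 part_end=0
for part in 1; do              # a single partition, as in this run
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "--new=${part}:${part_start}:${part_end}"
done
# prints: --new=1:2048:2099199
```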
00:03:58.778 07:23:02 -- setup/common.sh@57 -- # (( part++ )) 00:03:58.778 07:23:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.778 07:23:02 -- setup/common.sh@62 -- # wait 3924903 00:03:58.778 07:23:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.778 07:23:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:58.778 07:23:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.778 07:23:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:58.778 07:23:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:58.778 07:23:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.778 07:23:02 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.778 07:23:02 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.778 07:23:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:58.778 07:23:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.778 07:23:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.778 07:23:02 -- setup/devices.sh@53 -- # local found=0 00:03:58.778 07:23:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.778 07:23:02 -- setup/devices.sh@56 -- # : 00:03:58.778 07:23:02 -- setup/devices.sh@59 -- # local pci status 00:03:58.778 07:23:02 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:58.778 07:23:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.778 07:23:02 -- setup/devices.sh@47 -- # setup output config 00:03:58.778 07:23:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.778 07:23:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:00.756 07:23:04 -- setup/devices.sh@63 -- # found=1 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.756 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.756 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.757 07:23:04 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.757 07:23:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.017 07:23:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.017 07:23:04 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.017 07:23:04 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.017 07:23:04 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.017 
07:23:04 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.017 07:23:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.017 07:23:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.017 07:23:04 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.017 07:23:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.017 07:23:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.017 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.017 07:23:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.017 07:23:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.276 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:01.276 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:01.276 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.276 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.276 07:23:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:01.276 07:23:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:01.276 07:23:05 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.276 07:23:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.276 07:23:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:01.276 07:23:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.276 07:23:05 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.276 07:23:05 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:01.276 07:23:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:01.276 07:23:05 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.276 07:23:05 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.276 07:23:05 -- setup/devices.sh@53 -- # local found=0 00:04:01.276 07:23:05 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.276 07:23:05 -- setup/devices.sh@56 -- # : 00:04:01.276 07:23:05 -- setup/devices.sh@59 -- # local pci status 00:04:01.276 07:23:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.276 07:23:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:01.276 07:23:05 -- setup/devices.sh@47 -- # setup output config 00:04:01.276 07:23:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.276 07:23:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:03.812 07:23:07 -- setup/devices.sh@63 -- # found=1 00:04:03.812 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.812 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.812 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.812 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.812 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.812 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.813 07:23:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.813 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.072 07:23:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.072 07:23:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:04.072 07:23:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.072 07:23:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.072 07:23:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.072 07:23:07 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.072 07:23:07 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:04.072 07:23:07 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.072 07:23:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:04.072 07:23:07 -- setup/devices.sh@50 -- # local mount_point= 00:04:04.072 07:23:07 -- setup/devices.sh@51 -- # local test_file= 00:04:04.072 07:23:07 -- setup/devices.sh@53 -- # local found=0 00:04:04.072 07:23:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:04.072 07:23:07 -- setup/devices.sh@59 -- # local pci status 00:04:04.072 07:23:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.072 07:23:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.072 07:23:07 -- setup/devices.sh@47 -- # setup 
output config 00:04:04.072 07:23:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.072 07:23:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:06.611 07:23:10 -- setup/devices.sh@63 -- # found=1 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.611 07:23:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.611 07:23:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.871 07:23:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.871 07:23:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.871 07:23:10 -- setup/devices.sh@68 -- # return 0 00:04:06.871 07:23:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:06.871 07:23:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.871 07:23:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.871 07:23:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.871 07:23:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.871 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:04:06.871 00:04:06.871 real 0m10.240s 00:04:06.871 user 0m2.897s 00:04:06.871 sys 0m4.961s 00:04:06.871 07:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.871 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:04:06.871 ************************************ 00:04:06.871 END TEST nvme_mount 00:04:06.871 ************************************ 00:04:06.871 07:23:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:06.871 07:23:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.871 07:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.871 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:04:06.871 ************************************ 00:04:06.871 START TEST dm_mount 00:04:06.871 ************************************ 00:04:06.871 07:23:10 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:06.871 07:23:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:06.871 07:23:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:06.871 07:23:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:06.871 07:23:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:06.871 07:23:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:06.871 07:23:10 -- setup/common.sh@40 -- # local part_no=2 00:04:06.871 07:23:10 -- setup/common.sh@41 -- # local size=1073741824 00:04:06.871 07:23:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.871 07:23:10 -- setup/common.sh@44 -- # parts=() 00:04:06.871 07:23:10 -- setup/common.sh@44 -- # local parts 00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.871 07:23:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.871 07:23:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:06.871 07:23:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.871 07:23:10 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:06.871 07:23:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:06.871 07:23:10 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:07.811 Creating new GPT entries in memory. 00:04:07.811 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:07.811 other utilities. 00:04:07.811 07:23:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:07.811 07:23:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.811 07:23:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.811 07:23:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.811 07:23:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:08.750 Creating new GPT entries in memory. 00:04:08.750 The operation has completed successfully. 00:04:08.750 07:23:12 -- setup/common.sh@57 -- # (( part++ )) 00:04:08.750 07:23:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.750 07:23:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.750 07:23:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.750 07:23:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:10.129 The operation has completed successfully. 
00:04:10.129 07:23:13 -- setup/common.sh@57 -- # (( part++ )) 00:04:10.129 07:23:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.129 07:23:13 -- setup/common.sh@62 -- # wait 3929000 00:04:10.129 07:23:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:10.129 07:23:13 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.129 07:23:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.129 07:23:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:10.129 07:23:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:10.129 07:23:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.129 07:23:13 -- setup/devices.sh@161 -- # break 00:04:10.129 07:23:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.129 07:23:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:10.129 07:23:13 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:10.129 07:23:13 -- setup/devices.sh@166 -- # dm=dm-2 00:04:10.129 07:23:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:10.129 07:23:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:10.129 07:23:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.129 07:23:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:10.129 07:23:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.129 07:23:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:10.129 07:23:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:10.130 07:23:13 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.130 07:23:13 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.130 07:23:13 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:10.130 07:23:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:10.130 07:23:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.130 07:23:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:10.130 07:23:13 -- setup/devices.sh@53 -- # local found=0 00:04:10.130 07:23:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:10.130 07:23:13 -- setup/devices.sh@56 -- # : 00:04:10.130 07:23:13 -- setup/devices.sh@59 -- # local pci status 00:04:10.130 07:23:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.130 07:23:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:10.130 07:23:13 -- setup/devices.sh@47 -- # setup output config 00:04:10.130 07:23:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.130 07:23:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:12.667 07:23:16 -- setup/devices.sh@63 -- # found=1 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 
07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.667 07:23:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:12.667 07:23:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.667 07:23:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.667 07:23:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.667 07:23:16 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:12.667 07:23:16 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:12.667 07:23:16 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:12.667 07:23:16 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:12.667 07:23:16 -- setup/devices.sh@50 -- # local mount_point= 00:04:12.667 07:23:16 -- setup/devices.sh@51 -- # local test_file= 00:04:12.667 07:23:16 -- setup/devices.sh@53 -- # local found=0 00:04:12.667 07:23:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:04:12.667 07:23:16 -- setup/devices.sh@59 -- # local pci status 00:04:12.667 07:23:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.667 07:23:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:12.667 07:23:16 -- setup/devices.sh@47 -- # setup output config 00:04:12.667 07:23:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.667 07:23:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:15.204 07:23:18 -- setup/devices.sh@63 -- # found=1 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.204 07:23:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.204 07:23:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.205 07:23:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.205 07:23:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.205 07:23:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:15.205 07:23:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.464 07:23:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.464 07:23:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.464 07:23:19 -- setup/devices.sh@68 -- # return 0 00:04:15.464 07:23:19 -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.464 07:23:19 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.464 07:23:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.464 07:23:19 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.464 07:23:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.464 07:23:19 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.464 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.464 07:23:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.464 07:23:19 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:15.464 00:04:15.464 real 0m8.587s 00:04:15.464 user 0m2.120s 00:04:15.464 sys 0m3.512s 00:04:15.464 07:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.464 07:23:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.464 ************************************ 00:04:15.464 END TEST dm_mount 00:04:15.464 ************************************ 00:04:15.464 07:23:19 -- setup/devices.sh@1 -- # cleanup 00:04:15.464 07:23:19 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.464 07:23:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.464 07:23:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.464 07:23:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.464 07:23:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.464 07:23:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.724 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:15.724 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:15.724 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.724 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.724 07:23:19 -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.724 
07:23:19 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:15.724 07:23:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.724 07:23:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.724 07:23:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.724 07:23:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.724 07:23:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.724 00:04:15.724 real 0m22.457s 00:04:15.724 user 0m6.317s 00:04:15.724 sys 0m10.677s 00:04:15.724 07:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.724 07:23:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.724 ************************************ 00:04:15.724 END TEST devices 00:04:15.724 ************************************ 00:04:15.724 00:04:15.724 real 1m15.458s 00:04:15.724 user 0m24.892s 00:04:15.724 sys 0m41.279s 00:04:15.724 07:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.724 07:23:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.724 ************************************ 00:04:15.724 END TEST setup.sh 00:04:15.724 ************************************ 00:04:15.724 07:23:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.262 Hugepages 00:04:18.262 node hugesize free / total 00:04:18.262 node0 1048576kB 0 / 0 00:04:18.262 node0 2048kB 2048 / 2048 00:04:18.262 node1 1048576kB 0 / 0 00:04:18.262 node1 2048kB 0 / 0 00:04:18.262 00:04:18.262 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.262 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:18.262 
I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:18.262 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:18.522 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:18.522 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:18.522 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:18.522 07:23:22 -- spdk/autotest.sh@141 -- # uname -s 00:04:18.522 07:23:22 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:18.522 07:23:22 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:18.522 07:23:22 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.061 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.061 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.061 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.061 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.061 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.061 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.321 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.260 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.260 07:23:26 -- common/autotest_common.sh@1517 
-- # sleep 1 00:04:23.199 07:23:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:23.199 07:23:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:23.199 07:23:27 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.199 07:23:27 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:23.199 07:23:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:23.199 07:23:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:23.199 07:23:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.199 07:23:27 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.199 07:23:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:23.199 07:23:27 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:23.199 07:23:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:23.199 07:23:27 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.746 Waiting for block devices as requested 00:04:25.746 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:26.005 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:26.005 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:26.005 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:26.264 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:26.264 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:26.264 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:26.523 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.523 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:26.523 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:26.523 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:26.782 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:26.782 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:26.782 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:27.042 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:27.042 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:27.042 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:27.042 07:23:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:27.042 07:23:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:27.042 07:23:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:27.042 07:23:30 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:27.042 07:23:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:27.042 07:23:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:27.042 07:23:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:27.042 07:23:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:27.042 07:23:31 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:27.042 07:23:31 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:27.042 07:23:31 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:27.042 07:23:31 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:27.042 07:23:31 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:27.301 07:23:31 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:04:27.302 07:23:31 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:27.302 07:23:31 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:27.302 07:23:31 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:27.302 07:23:31 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:27.302 07:23:31 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:27.302 07:23:31 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:27.302 07:23:31 -- common/autotest_common.sh@1540 -- # [[ 
0 -eq 0 ]] 00:04:27.302 07:23:31 -- common/autotest_common.sh@1542 -- # continue 00:04:27.302 07:23:31 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:27.302 07:23:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:27.302 07:23:31 -- common/autotest_common.sh@10 -- # set +x 00:04:27.302 07:23:31 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:27.302 07:23:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:27.302 07:23:31 -- common/autotest_common.sh@10 -- # set +x 00:04:27.302 07:23:31 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.839 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.839 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.408 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.667 07:23:34 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:30.667 07:23:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.667 07:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:30.667 07:23:34 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:30.667 07:23:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:04:30.667 07:23:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.667 07:23:34 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:30.667 07:23:34 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:30.667 07:23:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.667 07:23:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.667 07:23:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.667 07:23:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.667 07:23:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:30.667 07:23:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.667 07:23:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:30.667 07:23:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:30.667 07:23:34 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.667 07:23:34 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:30.667 07:23:34 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:30.667 07:23:34 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:30.667 07:23:34 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:30.667 07:23:34 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:30.667 07:23:34 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:30.667 07:23:34 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3937826 00:04:30.667 07:23:34 -- common/autotest_common.sh@1583 -- # waitforlisten 3937826 00:04:30.667 07:23:34 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.667 07:23:34 -- common/autotest_common.sh@819 -- # '[' -z 3937826 ']' 00:04:30.667 07:23:34 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.667 07:23:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:30.667 07:23:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.667 07:23:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:30.667 07:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:30.667 [2024-10-07 07:23:34.608819] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:30.667 [2024-10-07 07:23:34.608873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937826 ] 00:04:30.667 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.927 [2024-10-07 07:23:34.669507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.927 [2024-10-07 07:23:34.751669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.927 [2024-10-07 07:23:34.751800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.495 07:23:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.495 07:23:35 -- common/autotest_common.sh@852 -- # return 0 00:04:31.495 07:23:35 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:31.495 07:23:35 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:31.495 07:23:35 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:34.784 nvme0n1 00:04:34.784 07:23:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_opal_revert -b nvme0 -p test 00:04:34.784 [2024-10-07 07:23:38.573873] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:34.784 [2024-10-07 07:23:38.573904] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:34.784 request: 00:04:34.784 { 00:04:34.784 "nvme_ctrlr_name": "nvme0", 00:04:34.784 "password": "test", 00:04:34.784 "method": "bdev_nvme_opal_revert", 00:04:34.784 "req_id": 1 00:04:34.784 } 00:04:34.784 Got JSON-RPC error response 00:04:34.784 response: 00:04:34.784 { 00:04:34.784 "code": -32603, 00:04:34.784 "message": "Internal error" 00:04:34.784 } 00:04:34.784 07:23:38 -- common/autotest_common.sh@1589 -- # true 00:04:34.784 07:23:38 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:34.784 07:23:38 -- common/autotest_common.sh@1593 -- # killprocess 3937826 00:04:34.784 07:23:38 -- common/autotest_common.sh@926 -- # '[' -z 3937826 ']' 00:04:34.784 07:23:38 -- common/autotest_common.sh@930 -- # kill -0 3937826 00:04:34.784 07:23:38 -- common/autotest_common.sh@931 -- # uname 00:04:34.784 07:23:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:34.784 07:23:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3937826 00:04:34.784 07:23:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:34.784 07:23:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:34.784 07:23:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3937826' 00:04:34.784 killing process with pid 3937826 00:04:34.784 07:23:38 -- common/autotest_common.sh@945 -- # kill 3937826 00:04:34.784 07:23:38 -- common/autotest_common.sh@950 -- # wait 3937826 00:04:36.689 07:23:40 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:36.689 07:23:40 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:36.689 07:23:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:36.689 07:23:40 -- spdk/autotest.sh@166 -- # 
[[ 0 -eq 1 ]] 00:04:36.689 07:23:40 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:36.689 07:23:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:36.689 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.689 07:23:40 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.689 07:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.689 07:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.689 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.689 ************************************ 00:04:36.689 START TEST env 00:04:36.689 ************************************ 00:04:36.689 07:23:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:36.689 * Looking for test storage... 00:04:36.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:36.689 07:23:40 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.689 07:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.689 07:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.689 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.689 ************************************ 00:04:36.689 START TEST env_memory 00:04:36.689 ************************************ 00:04:36.689 07:23:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:36.689 00:04:36.689 00:04:36.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.689 http://cunit.sourceforge.net/ 00:04:36.689 00:04:36.689 00:04:36.689 Suite: memory 00:04:36.689 Test: alloc and free memory map ...[2024-10-07 07:23:40.430314] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 
00:04:36.689 passed 00:04:36.689 Test: mem map translation ...[2024-10-07 07:23:40.448175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.689 [2024-10-07 07:23:40.448190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.689 [2024-10-07 07:23:40.448224] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.689 [2024-10-07 07:23:40.448230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.689 passed 00:04:36.689 Test: mem map registration ...[2024-10-07 07:23:40.484361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:36.689 [2024-10-07 07:23:40.484375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:36.689 passed 00:04:36.689 Test: mem map adjacent registrations ...passed 00:04:36.689 00:04:36.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.689 suites 1 1 n/a 0 0 00:04:36.689 tests 4 4 4 0 0 00:04:36.689 asserts 152 152 152 0 n/a 00:04:36.689 00:04:36.689 Elapsed time = 0.138 seconds 00:04:36.689 00:04:36.689 real 0m0.150s 00:04:36.689 user 0m0.134s 00:04:36.689 sys 0m0.015s 00:04:36.689 07:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.689 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.689 ************************************ 00:04:36.689 END TEST 
env_memory 00:04:36.689 ************************************ 00:04:36.689 07:23:40 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.689 07:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.689 07:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.689 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.689 ************************************ 00:04:36.689 START TEST env_vtophys 00:04:36.689 ************************************ 00:04:36.689 07:23:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:36.689 EAL: lib.eal log level changed from notice to debug 00:04:36.689 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.689 EAL: Detected lcore 1 as core 1 on socket 0 00:04:36.689 EAL: Detected lcore 2 as core 2 on socket 0 00:04:36.689 EAL: Detected lcore 3 as core 3 on socket 0 00:04:36.689 EAL: Detected lcore 4 as core 4 on socket 0 00:04:36.689 EAL: Detected lcore 5 as core 5 on socket 0 00:04:36.689 EAL: Detected lcore 6 as core 6 on socket 0 00:04:36.689 EAL: Detected lcore 7 as core 8 on socket 0 00:04:36.689 EAL: Detected lcore 8 as core 9 on socket 0 00:04:36.689 EAL: Detected lcore 9 as core 10 on socket 0 00:04:36.689 EAL: Detected lcore 10 as core 11 on socket 0 00:04:36.689 EAL: Detected lcore 11 as core 12 on socket 0 00:04:36.689 EAL: Detected lcore 12 as core 13 on socket 0 00:04:36.689 EAL: Detected lcore 13 as core 16 on socket 0 00:04:36.689 EAL: Detected lcore 14 as core 17 on socket 0 00:04:36.689 EAL: Detected lcore 15 as core 18 on socket 0 00:04:36.689 EAL: Detected lcore 16 as core 19 on socket 0 00:04:36.689 EAL: Detected lcore 17 as core 20 on socket 0 00:04:36.689 EAL: Detected lcore 18 as core 21 on socket 0 00:04:36.689 EAL: Detected lcore 19 as core 25 on socket 0 00:04:36.689 EAL: Detected lcore 20 as core 26 on socket 0 00:04:36.689 EAL: 
Detected lcore 21 as core 27 on socket 0 00:04:36.689 EAL: Detected lcore 22 as core 28 on socket 0 00:04:36.689 EAL: Detected lcore 23 as core 29 on socket 0 00:04:36.689 EAL: Detected lcore 24 as core 0 on socket 1 00:04:36.689 EAL: Detected lcore 25 as core 1 on socket 1 00:04:36.689 EAL: Detected lcore 26 as core 2 on socket 1 00:04:36.689 EAL: Detected lcore 27 as core 3 on socket 1 00:04:36.689 EAL: Detected lcore 28 as core 4 on socket 1 00:04:36.689 EAL: Detected lcore 29 as core 5 on socket 1 00:04:36.689 EAL: Detected lcore 30 as core 6 on socket 1 00:04:36.689 EAL: Detected lcore 31 as core 8 on socket 1 00:04:36.689 EAL: Detected lcore 32 as core 9 on socket 1 00:04:36.689 EAL: Detected lcore 33 as core 10 on socket 1 00:04:36.689 EAL: Detected lcore 34 as core 11 on socket 1 00:04:36.689 EAL: Detected lcore 35 as core 12 on socket 1 00:04:36.689 EAL: Detected lcore 36 as core 13 on socket 1 00:04:36.689 EAL: Detected lcore 37 as core 16 on socket 1 00:04:36.689 EAL: Detected lcore 38 as core 17 on socket 1 00:04:36.689 EAL: Detected lcore 39 as core 18 on socket 1 00:04:36.689 EAL: Detected lcore 40 as core 19 on socket 1 00:04:36.689 EAL: Detected lcore 41 as core 20 on socket 1 00:04:36.689 EAL: Detected lcore 42 as core 21 on socket 1 00:04:36.689 EAL: Detected lcore 43 as core 25 on socket 1 00:04:36.689 EAL: Detected lcore 44 as core 26 on socket 1 00:04:36.689 EAL: Detected lcore 45 as core 27 on socket 1 00:04:36.689 EAL: Detected lcore 46 as core 28 on socket 1 00:04:36.689 EAL: Detected lcore 47 as core 29 on socket 1 00:04:36.689 EAL: Detected lcore 48 as core 0 on socket 0 00:04:36.689 EAL: Detected lcore 49 as core 1 on socket 0 00:04:36.689 EAL: Detected lcore 50 as core 2 on socket 0 00:04:36.689 EAL: Detected lcore 51 as core 3 on socket 0 00:04:36.689 EAL: Detected lcore 52 as core 4 on socket 0 00:04:36.689 EAL: Detected lcore 53 as core 5 on socket 0 00:04:36.689 EAL: Detected lcore 54 as core 6 on socket 0 00:04:36.689 EAL: Detected 
lcore 55 as core 8 on socket 0 00:04:36.689 EAL: Detected lcore 56 as core 9 on socket 0 00:04:36.689 EAL: Detected lcore 57 as core 10 on socket 0 00:04:36.689 EAL: Detected lcore 58 as core 11 on socket 0 00:04:36.689 EAL: Detected lcore 59 as core 12 on socket 0 00:04:36.689 EAL: Detected lcore 60 as core 13 on socket 0 00:04:36.689 EAL: Detected lcore 61 as core 16 on socket 0 00:04:36.689 EAL: Detected lcore 62 as core 17 on socket 0 00:04:36.689 EAL: Detected lcore 63 as core 18 on socket 0 00:04:36.689 EAL: Detected lcore 64 as core 19 on socket 0 00:04:36.689 EAL: Detected lcore 65 as core 20 on socket 0 00:04:36.689 EAL: Detected lcore 66 as core 21 on socket 0 00:04:36.689 EAL: Detected lcore 67 as core 25 on socket 0 00:04:36.689 EAL: Detected lcore 68 as core 26 on socket 0 00:04:36.689 EAL: Detected lcore 69 as core 27 on socket 0 00:04:36.689 EAL: Detected lcore 70 as core 28 on socket 0 00:04:36.689 EAL: Detected lcore 71 as core 29 on socket 0 00:04:36.689 EAL: Detected lcore 72 as core 0 on socket 1 00:04:36.689 EAL: Detected lcore 73 as core 1 on socket 1 00:04:36.689 EAL: Detected lcore 74 as core 2 on socket 1 00:04:36.689 EAL: Detected lcore 75 as core 3 on socket 1 00:04:36.689 EAL: Detected lcore 76 as core 4 on socket 1 00:04:36.689 EAL: Detected lcore 77 as core 5 on socket 1 00:04:36.689 EAL: Detected lcore 78 as core 6 on socket 1 00:04:36.689 EAL: Detected lcore 79 as core 8 on socket 1 00:04:36.689 EAL: Detected lcore 80 as core 9 on socket 1 00:04:36.689 EAL: Detected lcore 81 as core 10 on socket 1 00:04:36.689 EAL: Detected lcore 82 as core 11 on socket 1 00:04:36.689 EAL: Detected lcore 83 as core 12 on socket 1 00:04:36.689 EAL: Detected lcore 84 as core 13 on socket 1 00:04:36.689 EAL: Detected lcore 85 as core 16 on socket 1 00:04:36.689 EAL: Detected lcore 86 as core 17 on socket 1 00:04:36.689 EAL: Detected lcore 87 as core 18 on socket 1 00:04:36.690 EAL: Detected lcore 88 as core 19 on socket 1 00:04:36.690 EAL: Detected 
lcore 89 as core 20 on socket 1 00:04:36.690 EAL: Detected lcore 90 as core 21 on socket 1 00:04:36.690 EAL: Detected lcore 91 as core 25 on socket 1 00:04:36.690 EAL: Detected lcore 92 as core 26 on socket 1 00:04:36.690 EAL: Detected lcore 93 as core 27 on socket 1 00:04:36.690 EAL: Detected lcore 94 as core 28 on socket 1 00:04:36.690 EAL: Detected lcore 95 as core 29 on socket 1 00:04:36.690 EAL: Maximum logical cores by configuration: 128 00:04:36.690 EAL: Detected CPU lcores: 96 00:04:36.690 EAL: Detected NUMA nodes: 2 00:04:36.690 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:36.690 EAL: Detected shared linkage of DPDK 00:04:36.690 EAL: No shared files mode enabled, IPC will be disabled 00:04:36.690 EAL: Bus pci wants IOVA as 'DC' 00:04:36.690 EAL: Buses did not request a specific IOVA mode. 00:04:36.690 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:36.690 EAL: Selected IOVA mode 'VA' 00:04:36.690 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.690 EAL: Probing VFIO support... 00:04:36.690 EAL: IOMMU type 1 (Type 1) is supported 00:04:36.690 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:36.690 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:36.690 EAL: VFIO support initialized 00:04:36.690 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.690 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.690 EAL: Setting up physically contiguous memory... 
00:04:36.690 EAL: Setting maximum number of open files to 524288 00:04:36.690 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.690 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:36.690 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.690 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:36.690 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.690 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:36.690 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:36.690 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.690 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:36.690 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:36.690 EAL: Hugepages will be freed exactly as allocated. 
00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: TSC frequency is ~2100000 KHz 00:04:36.690 EAL: Main lcore 0 is ready (tid=7f522e717a00;cpuset=[0]) 00:04:36.690 EAL: Trying to obtain current memory policy. 00:04:36.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.690 EAL: Restoring previous memory policy: 0 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.690 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.690 00:04:36.690 00:04:36.690 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.690 http://cunit.sourceforge.net/ 00:04:36.690 00:04:36.690 00:04:36.690 Suite: components_suite 00:04:36.690 Test: vtophys_malloc_test ...passed 00:04:36.690 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.690 EAL: Restoring previous memory policy: 4 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.690 EAL: Trying to obtain current memory policy. 
00:04:36.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.690 EAL: Restoring previous memory policy: 4 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.690 EAL: Trying to obtain current memory policy. 00:04:36.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.690 EAL: Restoring previous memory policy: 4 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.690 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.690 EAL: request: mp_malloc_sync 00:04:36.690 EAL: No shared files mode enabled, IPC is disabled 00:04:36.690 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.690 EAL: Trying to obtain current memory policy. 00:04:36.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.949 EAL: Restoring previous memory policy: 4 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.949 EAL: Trying to obtain current memory policy. 
00:04:36.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.949 EAL: Restoring previous memory policy: 4 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.949 EAL: Trying to obtain current memory policy. 00:04:36.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.949 EAL: Restoring previous memory policy: 4 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.949 EAL: Trying to obtain current memory policy. 00:04:36.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.949 EAL: Restoring previous memory policy: 4 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.949 EAL: Trying to obtain current memory policy. 
00:04:36.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.949 EAL: Restoring previous memory policy: 4 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.949 EAL: request: mp_malloc_sync 00:04:36.949 EAL: No shared files mode enabled, IPC is disabled 00:04:36.949 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.949 EAL: Trying to obtain current memory policy. 00:04:36.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.206 EAL: Restoring previous memory policy: 4 00:04:37.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.206 EAL: request: mp_malloc_sync 00:04:37.206 EAL: No shared files mode enabled, IPC is disabled 00:04:37.206 EAL: Heap on socket 0 was expanded by 514MB 00:04:37.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.206 EAL: request: mp_malloc_sync 00:04:37.206 EAL: No shared files mode enabled, IPC is disabled 00:04:37.206 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.207 EAL: Trying to obtain current memory policy. 
00:04:37.207 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.465 EAL: Restoring previous memory policy: 4 00:04:37.465 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.465 EAL: request: mp_malloc_sync 00:04:37.465 EAL: No shared files mode enabled, IPC is disabled 00:04:37.465 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.724 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.724 EAL: request: mp_malloc_sync 00:04:37.724 EAL: No shared files mode enabled, IPC is disabled 00:04:37.724 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.724 passed 00:04:37.724 00:04:37.724 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.724 suites 1 1 n/a 0 0 00:04:37.724 tests 2 2 2 0 0 00:04:37.724 asserts 497 497 497 0 n/a 00:04:37.724 00:04:37.724 Elapsed time = 0.960 seconds 00:04:37.724 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.724 EAL: request: mp_malloc_sync 00:04:37.724 EAL: No shared files mode enabled, IPC is disabled 00:04:37.724 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.724 EAL: No shared files mode enabled, IPC is disabled 00:04:37.724 EAL: No shared files mode enabled, IPC is disabled 00:04:37.724 EAL: No shared files mode enabled, IPC is disabled 00:04:37.724 00:04:37.724 real 0m1.078s 00:04:37.724 user 0m0.624s 00:04:37.724 sys 0m0.419s 00:04:37.724 07:23:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.724 07:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:37.724 ************************************ 00:04:37.724 END TEST env_vtophys 00:04:37.724 ************************************ 00:04:37.724 07:23:41 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.724 07:23:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.724 07:23:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.724 07:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:37.724 ************************************ 00:04:37.724 
START TEST env_pci 00:04:37.724 ************************************ 00:04:37.724 07:23:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.984 00:04:37.984 00:04:37.984 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.984 http://cunit.sourceforge.net/ 00:04:37.984 00:04:37.984 00:04:37.984 Suite: pci 00:04:37.984 Test: pci_hook ...[2024-10-07 07:23:41.705280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3939275 has claimed it 00:04:37.984 EAL: Cannot find device (10000:00:01.0) 00:04:37.984 EAL: Failed to attach device on primary process 00:04:37.984 passed 00:04:37.984 00:04:37.984 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.984 suites 1 1 n/a 0 0 00:04:37.984 tests 1 1 1 0 0 00:04:37.984 asserts 25 25 25 0 n/a 00:04:37.984 00:04:37.984 Elapsed time = 0.027 seconds 00:04:37.984 00:04:37.984 real 0m0.047s 00:04:37.984 user 0m0.014s 00:04:37.984 sys 0m0.033s 00:04:37.984 07:23:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.984 07:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:37.984 ************************************ 00:04:37.984 END TEST env_pci 00:04:37.984 ************************************ 00:04:37.984 07:23:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.984 07:23:41 -- env/env.sh@15 -- # uname 00:04:37.984 07:23:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.984 07:23:41 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.984 07:23:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.984 07:23:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:37.984 07:23:41 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:04:37.984 07:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:37.984 ************************************ 00:04:37.984 START TEST env_dpdk_post_init 00:04:37.984 ************************************ 00:04:37.984 07:23:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.984 EAL: Detected CPU lcores: 96 00:04:37.984 EAL: Detected NUMA nodes: 2 00:04:37.984 EAL: Detected shared linkage of DPDK 00:04:37.984 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.984 EAL: Selected IOVA mode 'VA' 00:04:37.984 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.984 EAL: VFIO support initialized 00:04:37.984 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.984 EAL: Using IOMMU type 1 (Type 1) 00:04:37.984 EAL: Ignore mapping IO port bar(1) 00:04:37.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:37.984 EAL: Ignore mapping IO port bar(1) 00:04:37.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.984 EAL: Ignore mapping IO port bar(1) 00:04:37.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:37.984 EAL: Ignore mapping IO port bar(1) 00:04:37.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:37.984 EAL: Ignore mapping IO port bar(1) 00:04:37.984 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:38.244 EAL: Ignore mapping IO port bar(1) 00:04:38.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:38.244 EAL: Ignore mapping IO port bar(1) 00:04:38.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:38.244 EAL: Ignore mapping IO port bar(1) 00:04:38.244 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 
00:04:38.814 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:38.814 EAL: Ignore mapping IO port bar(1) 00:04:38.814 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:38.814 EAL: Ignore mapping IO port bar(1) 00:04:38.814 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:38.814 EAL: Ignore mapping IO port bar(1) 00:04:38.814 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:38.814 EAL: Ignore mapping IO port bar(1) 00:04:38.814 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:38.814 EAL: Ignore mapping IO port bar(1) 00:04:38.814 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:39.073 EAL: Ignore mapping IO port bar(1) 00:04:39.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:39.073 EAL: Ignore mapping IO port bar(1) 00:04:39.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:39.073 EAL: Ignore mapping IO port bar(1) 00:04:39.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:42.364 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:42.364 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:42.364 Starting DPDK initialization... 00:04:42.364 Starting SPDK post initialization... 00:04:42.364 SPDK NVMe probe 00:04:42.364 Attaching to 0000:5e:00.0 00:04:42.364 Attached to 0000:5e:00.0 00:04:42.364 Cleaning up... 
00:04:42.364 00:04:42.364 real 0m4.302s 00:04:42.364 user 0m3.217s 00:04:42.364 sys 0m0.159s 00:04:42.364 07:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.364 ************************************ 00:04:42.364 END TEST env_dpdk_post_init 00:04:42.364 ************************************ 00:04:42.364 07:23:46 -- env/env.sh@26 -- # uname 00:04:42.364 07:23:46 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.364 07:23:46 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.364 07:23:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.364 07:23:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.364 ************************************ 00:04:42.364 START TEST env_mem_callbacks 00:04:42.364 ************************************ 00:04:42.364 07:23:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.364 EAL: Detected CPU lcores: 96 00:04:42.364 EAL: Detected NUMA nodes: 2 00:04:42.364 EAL: Detected shared linkage of DPDK 00:04:42.364 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.364 EAL: Selected IOVA mode 'VA' 00:04:42.364 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.364 EAL: VFIO support initialized 00:04:42.364 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.364 00:04:42.364 00:04:42.364 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.364 http://cunit.sourceforge.net/ 00:04:42.364 00:04:42.364 00:04:42.364 Suite: memory 00:04:42.364 Test: test ... 
00:04:42.364 register 0x200000200000 2097152 00:04:42.364 malloc 3145728 00:04:42.364 register 0x200000400000 4194304 00:04:42.364 buf 0x200000500000 len 3145728 PASSED 00:04:42.364 malloc 64 00:04:42.364 buf 0x2000004fff40 len 64 PASSED 00:04:42.364 malloc 4194304 00:04:42.364 register 0x200000800000 6291456 00:04:42.364 buf 0x200000a00000 len 4194304 PASSED 00:04:42.364 free 0x200000500000 3145728 00:04:42.364 free 0x2000004fff40 64 00:04:42.364 unregister 0x200000400000 4194304 PASSED 00:04:42.364 free 0x200000a00000 4194304 00:04:42.364 unregister 0x200000800000 6291456 PASSED 00:04:42.364 malloc 8388608 00:04:42.364 register 0x200000400000 10485760 00:04:42.364 buf 0x200000600000 len 8388608 PASSED 00:04:42.364 free 0x200000600000 8388608 00:04:42.364 unregister 0x200000400000 10485760 PASSED 00:04:42.364 passed 00:04:42.364 00:04:42.364 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.364 suites 1 1 n/a 0 0 00:04:42.364 tests 1 1 1 0 0 00:04:42.364 asserts 15 15 15 0 n/a 00:04:42.364 00:04:42.364 Elapsed time = 0.005 seconds 00:04:42.364 00:04:42.364 real 0m0.053s 00:04:42.364 user 0m0.017s 00:04:42.364 sys 0m0.036s 00:04:42.364 07:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.364 ************************************ 00:04:42.364 END TEST env_mem_callbacks 00:04:42.364 ************************************ 00:04:42.364 00:04:42.364 real 0m5.910s 00:04:42.364 user 0m4.127s 00:04:42.364 sys 0m0.855s 00:04:42.364 07:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.364 ************************************ 00:04:42.364 END TEST env 00:04:42.364 ************************************ 00:04:42.364 07:23:46 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.364 07:23:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:04:42.364 07:23:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.364 ************************************ 00:04:42.364 START TEST rpc 00:04:42.364 ************************************ 00:04:42.364 07:23:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.364 * Looking for test storage... 00:04:42.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.364 07:23:46 -- rpc/rpc.sh@65 -- # spdk_pid=3940086 00:04:42.364 07:23:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.364 07:23:46 -- rpc/rpc.sh@67 -- # waitforlisten 3940086 00:04:42.364 07:23:46 -- common/autotest_common.sh@819 -- # '[' -z 3940086 ']' 00:04:42.364 07:23:46 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.364 07:23:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.364 07:23:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.364 07:23:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.364 07:23:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.364 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:42.671 [2024-10-07 07:23:46.361413] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:42.671 [2024-10-07 07:23:46.361463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940086 ] 00:04:42.671 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.671 [2024-10-07 07:23:46.416102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.671 [2024-10-07 07:23:46.491305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.671 [2024-10-07 07:23:46.491416] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.671 [2024-10-07 07:23:46.491426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3940086' to capture a snapshot of events at runtime. 00:04:42.671 [2024-10-07 07:23:46.491433] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3940086 for offline analysis/debug. 
00:04:42.671 [2024-10-07 07:23:46.491450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.302 07:23:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.302 07:23:47 -- common/autotest_common.sh@852 -- # return 0 00:04:43.302 07:23:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.302 07:23:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.302 07:23:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.302 07:23:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.302 07:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.302 07:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.302 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.302 ************************************ 00:04:43.302 START TEST rpc_integrity 00:04:43.302 ************************************ 00:04:43.302 07:23:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:43.302 07:23:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.302 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.302 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.302 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.302 07:23:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.302 07:23:47 -- rpc/rpc.sh@13 -- # jq length 00:04:43.302 07:23:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:43.302 07:23:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.302 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.302 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.302 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.302 07:23:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.302 07:23:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.302 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.302 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.302 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.302 07:23:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.302 { 00:04:43.302 "name": "Malloc0", 00:04:43.302 "aliases": [ 00:04:43.302 "39d1dd31-5434-4d83-913c-3f5f304b0b6a" 00:04:43.302 ], 00:04:43.302 "product_name": "Malloc disk", 00:04:43.302 "block_size": 512, 00:04:43.302 "num_blocks": 16384, 00:04:43.302 "uuid": "39d1dd31-5434-4d83-913c-3f5f304b0b6a", 00:04:43.302 "assigned_rate_limits": { 00:04:43.302 "rw_ios_per_sec": 0, 00:04:43.302 "rw_mbytes_per_sec": 0, 00:04:43.302 "r_mbytes_per_sec": 0, 00:04:43.302 "w_mbytes_per_sec": 0 00:04:43.302 }, 00:04:43.302 "claimed": false, 00:04:43.302 "zoned": false, 00:04:43.302 "supported_io_types": { 00:04:43.302 "read": true, 00:04:43.302 "write": true, 00:04:43.302 "unmap": true, 00:04:43.302 "write_zeroes": true, 00:04:43.302 "flush": true, 00:04:43.302 "reset": true, 00:04:43.302 "compare": false, 00:04:43.302 "compare_and_write": false, 00:04:43.302 "abort": true, 00:04:43.302 "nvme_admin": false, 00:04:43.302 "nvme_io": false 00:04:43.302 }, 00:04:43.302 "memory_domains": [ 00:04:43.302 { 00:04:43.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.302 "dma_device_type": 2 00:04:43.302 } 00:04:43.302 ], 00:04:43.302 "driver_specific": {} 00:04:43.302 } 00:04:43.302 ]' 00:04:43.302 07:23:47 -- rpc/rpc.sh@17 -- # jq length 00:04:43.562 07:23:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:04:43.562 07:23:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.562 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.562 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.562 [2024-10-07 07:23:47.295888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.562 [2024-10-07 07:23:47.295918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.562 [2024-10-07 07:23:47.295931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24cf750 00:04:43.562 [2024-10-07 07:23:47.295937] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.562 [2024-10-07 07:23:47.296999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.562 [2024-10-07 07:23:47.297021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.562 Passthru0 00:04:43.562 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.562 07:23:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.562 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.562 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.562 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.562 07:23:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.562 { 00:04:43.562 "name": "Malloc0", 00:04:43.562 "aliases": [ 00:04:43.562 "39d1dd31-5434-4d83-913c-3f5f304b0b6a" 00:04:43.562 ], 00:04:43.562 "product_name": "Malloc disk", 00:04:43.562 "block_size": 512, 00:04:43.562 "num_blocks": 16384, 00:04:43.562 "uuid": "39d1dd31-5434-4d83-913c-3f5f304b0b6a", 00:04:43.562 "assigned_rate_limits": { 00:04:43.562 "rw_ios_per_sec": 0, 00:04:43.562 "rw_mbytes_per_sec": 0, 00:04:43.562 "r_mbytes_per_sec": 0, 00:04:43.562 "w_mbytes_per_sec": 0 00:04:43.562 }, 00:04:43.562 "claimed": true, 00:04:43.562 "claim_type": "exclusive_write", 00:04:43.562 "zoned": 
false, 00:04:43.562 "supported_io_types": { 00:04:43.562 "read": true, 00:04:43.562 "write": true, 00:04:43.562 "unmap": true, 00:04:43.562 "write_zeroes": true, 00:04:43.562 "flush": true, 00:04:43.562 "reset": true, 00:04:43.562 "compare": false, 00:04:43.562 "compare_and_write": false, 00:04:43.562 "abort": true, 00:04:43.562 "nvme_admin": false, 00:04:43.562 "nvme_io": false 00:04:43.562 }, 00:04:43.562 "memory_domains": [ 00:04:43.562 { 00:04:43.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.562 "dma_device_type": 2 00:04:43.562 } 00:04:43.562 ], 00:04:43.562 "driver_specific": {} 00:04:43.562 }, 00:04:43.562 { 00:04:43.562 "name": "Passthru0", 00:04:43.562 "aliases": [ 00:04:43.562 "b97862cb-6cb0-50d4-88cb-0e8f50ff0b5e" 00:04:43.562 ], 00:04:43.562 "product_name": "passthru", 00:04:43.562 "block_size": 512, 00:04:43.562 "num_blocks": 16384, 00:04:43.562 "uuid": "b97862cb-6cb0-50d4-88cb-0e8f50ff0b5e", 00:04:43.562 "assigned_rate_limits": { 00:04:43.562 "rw_ios_per_sec": 0, 00:04:43.562 "rw_mbytes_per_sec": 0, 00:04:43.562 "r_mbytes_per_sec": 0, 00:04:43.562 "w_mbytes_per_sec": 0 00:04:43.562 }, 00:04:43.562 "claimed": false, 00:04:43.562 "zoned": false, 00:04:43.562 "supported_io_types": { 00:04:43.562 "read": true, 00:04:43.562 "write": true, 00:04:43.562 "unmap": true, 00:04:43.562 "write_zeroes": true, 00:04:43.562 "flush": true, 00:04:43.562 "reset": true, 00:04:43.562 "compare": false, 00:04:43.562 "compare_and_write": false, 00:04:43.562 "abort": true, 00:04:43.562 "nvme_admin": false, 00:04:43.562 "nvme_io": false 00:04:43.562 }, 00:04:43.562 "memory_domains": [ 00:04:43.562 { 00:04:43.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.562 "dma_device_type": 2 00:04:43.562 } 00:04:43.562 ], 00:04:43.562 "driver_specific": { 00:04:43.562 "passthru": { 00:04:43.562 "name": "Passthru0", 00:04:43.562 "base_bdev_name": "Malloc0" 00:04:43.562 } 00:04:43.562 } 00:04:43.562 } 00:04:43.562 ]' 00:04:43.562 07:23:47 -- rpc/rpc.sh@21 -- # jq length 
00:04:43.562 07:23:47 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.562 07:23:47 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.562 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.562 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.562 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.562 07:23:47 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.562 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.562 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.562 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.562 07:23:47 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.562 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.562 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.562 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.562 07:23:47 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.563 07:23:47 -- rpc/rpc.sh@26 -- # jq length 00:04:43.563 07:23:47 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.563 00:04:43.563 real 0m0.232s 00:04:43.563 user 0m0.149s 00:04:43.563 sys 0m0.027s 00:04:43.563 07:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 ************************************ 00:04:43.563 END TEST rpc_integrity 00:04:43.563 ************************************ 00:04:43.563 07:23:47 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.563 07:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.563 07:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 ************************************ 00:04:43.563 START TEST rpc_plugins 00:04:43.563 ************************************ 00:04:43.563 07:23:47 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:43.563 07:23:47 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.563 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.563 07:23:47 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.563 07:23:47 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.563 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.563 07:23:47 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.563 { 00:04:43.563 "name": "Malloc1", 00:04:43.563 "aliases": [ 00:04:43.563 "5ea932ce-c2dd-4228-bab0-689f39c2cee7" 00:04:43.563 ], 00:04:43.563 "product_name": "Malloc disk", 00:04:43.563 "block_size": 4096, 00:04:43.563 "num_blocks": 256, 00:04:43.563 "uuid": "5ea932ce-c2dd-4228-bab0-689f39c2cee7", 00:04:43.563 "assigned_rate_limits": { 00:04:43.563 "rw_ios_per_sec": 0, 00:04:43.563 "rw_mbytes_per_sec": 0, 00:04:43.563 "r_mbytes_per_sec": 0, 00:04:43.563 "w_mbytes_per_sec": 0 00:04:43.563 }, 00:04:43.563 "claimed": false, 00:04:43.563 "zoned": false, 00:04:43.563 "supported_io_types": { 00:04:43.563 "read": true, 00:04:43.563 "write": true, 00:04:43.563 "unmap": true, 00:04:43.563 "write_zeroes": true, 00:04:43.563 "flush": true, 00:04:43.563 "reset": true, 00:04:43.563 "compare": false, 00:04:43.563 "compare_and_write": false, 00:04:43.563 "abort": true, 00:04:43.563 "nvme_admin": false, 00:04:43.563 "nvme_io": false 00:04:43.563 }, 00:04:43.563 "memory_domains": [ 00:04:43.563 { 00:04:43.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.563 "dma_device_type": 2 00:04:43.563 } 00:04:43.563 ], 00:04:43.563 "driver_specific": {} 00:04:43.563 } 00:04:43.563 ]' 00:04:43.563 07:23:47 -- rpc/rpc.sh@32 -- # jq length 00:04:43.563 07:23:47 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.563 07:23:47 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.563 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.563 07:23:47 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.563 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.563 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.563 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.563 07:23:47 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.563 07:23:47 -- rpc/rpc.sh@36 -- # jq length 00:04:43.823 07:23:47 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.823 00:04:43.823 real 0m0.118s 00:04:43.823 user 0m0.071s 00:04:43.823 sys 0m0.016s 00:04:43.823 07:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.823 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.823 ************************************ 00:04:43.823 END TEST rpc_plugins 00:04:43.823 ************************************ 00:04:43.823 07:23:47 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.823 07:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.823 07:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.823 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.823 ************************************ 00:04:43.823 START TEST rpc_trace_cmd_test 00:04:43.823 ************************************ 00:04:43.823 07:23:47 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:43.823 07:23:47 -- rpc/rpc.sh@40 -- # local info 00:04:43.823 07:23:47 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.823 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:43.823 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:43.823 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:43.823 07:23:47 -- 
rpc/rpc.sh@42 -- # info='{ 00:04:43.823 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3940086", 00:04:43.823 "tpoint_group_mask": "0x8", 00:04:43.823 "iscsi_conn": { 00:04:43.823 "mask": "0x2", 00:04:43.823 "tpoint_mask": "0x0" 00:04:43.823 }, 00:04:43.823 "scsi": { 00:04:43.823 "mask": "0x4", 00:04:43.823 "tpoint_mask": "0x0" 00:04:43.823 }, 00:04:43.823 "bdev": { 00:04:43.823 "mask": "0x8", 00:04:43.823 "tpoint_mask": "0xffffffffffffffff" 00:04:43.823 }, 00:04:43.824 "nvmf_rdma": { 00:04:43.824 "mask": "0x10", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "nvmf_tcp": { 00:04:43.824 "mask": "0x20", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "ftl": { 00:04:43.824 "mask": "0x40", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "blobfs": { 00:04:43.824 "mask": "0x80", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "dsa": { 00:04:43.824 "mask": "0x200", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "thread": { 00:04:43.824 "mask": "0x400", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "nvme_pcie": { 00:04:43.824 "mask": "0x800", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "iaa": { 00:04:43.824 "mask": "0x1000", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "nvme_tcp": { 00:04:43.824 "mask": "0x2000", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 }, 00:04:43.824 "bdev_nvme": { 00:04:43.824 "mask": "0x4000", 00:04:43.824 "tpoint_mask": "0x0" 00:04:43.824 } 00:04:43.824 }' 00:04:43.824 07:23:47 -- rpc/rpc.sh@43 -- # jq length 00:04:43.824 07:23:47 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:43.824 07:23:47 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.824 07:23:47 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.824 07:23:47 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.824 07:23:47 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.824 07:23:47 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.824 
07:23:47 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.824 07:23:47 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.084 07:23:47 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.084 00:04:44.084 real 0m0.198s 00:04:44.084 user 0m0.165s 00:04:44.084 sys 0m0.026s 00:04:44.084 07:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 ************************************ 00:04:44.084 END TEST rpc_trace_cmd_test 00:04:44.084 ************************************ 00:04:44.084 07:23:47 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.084 07:23:47 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.084 07:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.084 07:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 ************************************ 00:04:44.084 START TEST rpc_daemon_integrity 00:04:44.084 ************************************ 00:04:44.084 07:23:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:44.084 07:23:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.084 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.084 07:23:47 -- rpc/rpc.sh@13 -- # jq length 00:04:44.084 07:23:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.084 07:23:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.084 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:44.084 07:23:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.084 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.084 { 00:04:44.084 "name": "Malloc2", 00:04:44.084 "aliases": [ 00:04:44.084 "6084d58e-11c0-47ee-818b-f7a7870ee437" 00:04:44.084 ], 00:04:44.084 "product_name": "Malloc disk", 00:04:44.084 "block_size": 512, 00:04:44.084 "num_blocks": 16384, 00:04:44.084 "uuid": "6084d58e-11c0-47ee-818b-f7a7870ee437", 00:04:44.084 "assigned_rate_limits": { 00:04:44.084 "rw_ios_per_sec": 0, 00:04:44.084 "rw_mbytes_per_sec": 0, 00:04:44.084 "r_mbytes_per_sec": 0, 00:04:44.084 "w_mbytes_per_sec": 0 00:04:44.084 }, 00:04:44.084 "claimed": false, 00:04:44.084 "zoned": false, 00:04:44.084 "supported_io_types": { 00:04:44.084 "read": true, 00:04:44.084 "write": true, 00:04:44.084 "unmap": true, 00:04:44.084 "write_zeroes": true, 00:04:44.084 "flush": true, 00:04:44.084 "reset": true, 00:04:44.084 "compare": false, 00:04:44.084 "compare_and_write": false, 00:04:44.084 "abort": true, 00:04:44.084 "nvme_admin": false, 00:04:44.084 "nvme_io": false 00:04:44.084 }, 00:04:44.084 "memory_domains": [ 00:04:44.084 { 00:04:44.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.084 "dma_device_type": 2 00:04:44.084 } 00:04:44.084 ], 00:04:44.084 "driver_specific": {} 00:04:44.084 } 00:04:44.084 ]' 00:04:44.084 07:23:47 -- rpc/rpc.sh@17 -- # jq length 00:04:44.084 07:23:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.084 07:23:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.084 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 [2024-10-07 07:23:47.961690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:04:44.084 [2024-10-07 07:23:47.961716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.084 [2024-10-07 07:23:47.961728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x266f430 00:04:44.084 [2024-10-07 07:23:47.961734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.084 [2024-10-07 07:23:47.962659] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.084 [2024-10-07 07:23:47.962680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.084 Passthru0 00:04:44.084 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.084 07:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.084 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.084 07:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.084 07:23:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.084 { 00:04:44.084 "name": "Malloc2", 00:04:44.084 "aliases": [ 00:04:44.084 "6084d58e-11c0-47ee-818b-f7a7870ee437" 00:04:44.084 ], 00:04:44.084 "product_name": "Malloc disk", 00:04:44.084 "block_size": 512, 00:04:44.084 "num_blocks": 16384, 00:04:44.084 "uuid": "6084d58e-11c0-47ee-818b-f7a7870ee437", 00:04:44.084 "assigned_rate_limits": { 00:04:44.084 "rw_ios_per_sec": 0, 00:04:44.084 "rw_mbytes_per_sec": 0, 00:04:44.084 "r_mbytes_per_sec": 0, 00:04:44.084 "w_mbytes_per_sec": 0 00:04:44.084 }, 00:04:44.084 "claimed": true, 00:04:44.084 "claim_type": "exclusive_write", 00:04:44.084 "zoned": false, 00:04:44.084 "supported_io_types": { 00:04:44.084 "read": true, 00:04:44.084 "write": true, 00:04:44.084 "unmap": true, 00:04:44.084 "write_zeroes": true, 00:04:44.084 "flush": true, 00:04:44.084 "reset": true, 00:04:44.084 "compare": false, 00:04:44.084 "compare_and_write": false, 00:04:44.084 "abort": true, 00:04:44.084 
"nvme_admin": false, 00:04:44.084 "nvme_io": false 00:04:44.084 }, 00:04:44.084 "memory_domains": [ 00:04:44.084 { 00:04:44.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.084 "dma_device_type": 2 00:04:44.084 } 00:04:44.084 ], 00:04:44.084 "driver_specific": {} 00:04:44.084 }, 00:04:44.084 { 00:04:44.084 "name": "Passthru0", 00:04:44.084 "aliases": [ 00:04:44.084 "bdb08db1-10ec-5b55-8c1d-e99a59f9c083" 00:04:44.084 ], 00:04:44.084 "product_name": "passthru", 00:04:44.084 "block_size": 512, 00:04:44.084 "num_blocks": 16384, 00:04:44.084 "uuid": "bdb08db1-10ec-5b55-8c1d-e99a59f9c083", 00:04:44.084 "assigned_rate_limits": { 00:04:44.084 "rw_ios_per_sec": 0, 00:04:44.084 "rw_mbytes_per_sec": 0, 00:04:44.084 "r_mbytes_per_sec": 0, 00:04:44.084 "w_mbytes_per_sec": 0 00:04:44.085 }, 00:04:44.085 "claimed": false, 00:04:44.085 "zoned": false, 00:04:44.085 "supported_io_types": { 00:04:44.085 "read": true, 00:04:44.085 "write": true, 00:04:44.085 "unmap": true, 00:04:44.085 "write_zeroes": true, 00:04:44.085 "flush": true, 00:04:44.085 "reset": true, 00:04:44.085 "compare": false, 00:04:44.085 "compare_and_write": false, 00:04:44.085 "abort": true, 00:04:44.085 "nvme_admin": false, 00:04:44.085 "nvme_io": false 00:04:44.085 }, 00:04:44.085 "memory_domains": [ 00:04:44.085 { 00:04:44.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.085 "dma_device_type": 2 00:04:44.085 } 00:04:44.085 ], 00:04:44.085 "driver_specific": { 00:04:44.085 "passthru": { 00:04:44.085 "name": "Passthru0", 00:04:44.085 "base_bdev_name": "Malloc2" 00:04:44.085 } 00:04:44.085 } 00:04:44.085 } 00:04:44.085 ]' 00:04:44.085 07:23:47 -- rpc/rpc.sh@21 -- # jq length 00:04:44.085 07:23:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.085 07:23:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.085 07:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.085 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.085 07:23:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.085 07:23:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.085 07:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.085 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.085 07:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.085 07:23:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.085 07:23:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:44.085 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.085 07:23:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:44.085 07:23:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.085 07:23:48 -- rpc/rpc.sh@26 -- # jq length 00:04:44.344 07:23:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.344 00:04:44.344 real 0m0.254s 00:04:44.344 user 0m0.165s 00:04:44.344 sys 0m0.029s 00:04:44.344 07:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.344 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.344 ************************************ 00:04:44.344 END TEST rpc_daemon_integrity 00:04:44.344 ************************************ 00:04:44.344 07:23:48 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.344 07:23:48 -- rpc/rpc.sh@84 -- # killprocess 3940086 00:04:44.344 07:23:48 -- common/autotest_common.sh@926 -- # '[' -z 3940086 ']' 00:04:44.344 07:23:48 -- common/autotest_common.sh@930 -- # kill -0 3940086 00:04:44.344 07:23:48 -- common/autotest_common.sh@931 -- # uname 00:04:44.344 07:23:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:44.344 07:23:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3940086 00:04:44.344 07:23:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:44.344 07:23:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:44.344 07:23:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3940086' 00:04:44.344 killing process 
with pid 3940086 00:04:44.344 07:23:48 -- common/autotest_common.sh@945 -- # kill 3940086 00:04:44.344 07:23:48 -- common/autotest_common.sh@950 -- # wait 3940086 00:04:44.605 00:04:44.605 real 0m2.255s 00:04:44.605 user 0m2.864s 00:04:44.605 sys 0m0.583s 00:04:44.605 07:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.605 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.605 ************************************ 00:04:44.605 END TEST rpc 00:04:44.605 ************************************ 00:04:44.605 07:23:48 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.605 07:23:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.605 07:23:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.605 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.605 ************************************ 00:04:44.605 START TEST rpc_client 00:04:44.605 ************************************ 00:04:44.605 07:23:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.866 * Looking for test storage... 
00:04:44.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:44.866 07:23:48 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:44.866 OK 00:04:44.866 07:23:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.866 00:04:44.866 real 0m0.093s 00:04:44.866 user 0m0.048s 00:04:44.866 sys 0m0.052s 00:04:44.867 07:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.867 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 ************************************ 00:04:44.867 END TEST rpc_client 00:04:44.867 ************************************ 00:04:44.867 07:23:48 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.867 07:23:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.867 07:23:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.867 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 ************************************ 00:04:44.867 START TEST json_config 00:04:44.867 ************************************ 00:04:44.867 07:23:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.867 07:23:48 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.867 07:23:48 -- nvmf/common.sh@7 -- # uname -s 00:04:44.867 07:23:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.867 07:23:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.867 07:23:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.867 07:23:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.867 07:23:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.867 07:23:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.867 07:23:48 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.867 07:23:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.867 07:23:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.867 07:23:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.867 07:23:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:44.867 07:23:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:44.867 07:23:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.867 07:23:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.867 07:23:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.867 07:23:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.867 07:23:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.867 07:23:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.867 07:23:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.867 07:23:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.867 07:23:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.867 07:23:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.867 07:23:48 -- paths/export.sh@5 -- # export PATH 00:04:44.867 07:23:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.867 07:23:48 -- nvmf/common.sh@46 -- # : 0 00:04:44.867 07:23:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:44.867 07:23:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:44.867 07:23:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:44.867 07:23:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.867 07:23:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.867 07:23:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:44.867 07:23:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:44.867 07:23:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:44.867 
07:23:48 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.867 07:23:48 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:44.867 07:23:48 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:44.867 07:23:48 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:44.867 07:23:48 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:44.867 07:23:48 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:44.867 07:23:48 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:44.867 07:23:48 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:44.867 07:23:48 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:44.867 07:23:48 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:44.867 07:23:48 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.867 07:23:48 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:44.867 INFO: JSON configuration test init 00:04:44.867 07:23:48 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:44.867 07:23:48 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:44.867 07:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:44.867 07:23:48 -- 
common/autotest_common.sh@10 -- # set +x 00:04:44.867 07:23:48 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:44.867 07:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:44.867 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 07:23:48 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:44.867 07:23:48 -- json_config/json_config.sh@98 -- # local app=target 00:04:44.867 07:23:48 -- json_config/json_config.sh@99 -- # shift 00:04:44.867 07:23:48 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:44.867 07:23:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:44.867 07:23:48 -- json_config/json_config.sh@111 -- # app_pid[$app]=3940749 00:04:44.867 07:23:48 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:44.867 Waiting for target to run... 00:04:44.867 07:23:48 -- json_config/json_config.sh@114 -- # waitforlisten 3940749 /var/tmp/spdk_tgt.sock 00:04:44.867 07:23:48 -- common/autotest_common.sh@819 -- # '[' -z 3940749 ']' 00:04:44.867 07:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.867 07:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.867 07:23:48 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:44.867 07:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:44.867 07:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.867 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.867 [2024-10-07 07:23:48.816831] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:44.867 [2024-10-07 07:23:48.816881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940749 ] 00:04:45.127 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.127 [2024-10-07 07:23:49.092105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.386 [2024-10-07 07:23:49.155657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.386 [2024-10-07 07:23:49.155751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.954 07:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.954 07:23:49 -- common/autotest_common.sh@852 -- # return 0 00:04:45.954 07:23:49 -- json_config/json_config.sh@115 -- # echo '' 00:04:45.954 00:04:45.954 07:23:49 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:45.954 07:23:49 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:45.954 07:23:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:45.954 07:23:49 -- common/autotest_common.sh@10 -- # set +x 00:04:45.954 07:23:49 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:45.954 07:23:49 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:45.954 07:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:45.954 07:23:49 -- common/autotest_common.sh@10 -- # set +x 00:04:45.954 07:23:49 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:45.954 07:23:49 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:45.954 07:23:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:49.247 07:23:52 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:49.247 07:23:52 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:49.247 07:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:49.247 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:49.247 07:23:52 -- json_config/json_config.sh@48 -- # local ret=0 00:04:49.247 07:23:52 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:49.247 07:23:52 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:49.247 07:23:52 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:49.247 07:23:52 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:49.247 07:23:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:49.247 07:23:52 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:49.247 07:23:52 -- json_config/json_config.sh@51 -- # local get_types 00:04:49.247 07:23:52 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:49.247 07:23:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:49.247 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:49.247 07:23:52 -- json_config/json_config.sh@58 -- # return 0 00:04:49.247 07:23:52 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:49.247 07:23:52 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:49.247 07:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:49.247 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:04:49.247 07:23:52 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:49.247 07:23:52 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:49.247 07:23:52 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:49.247 07:23:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:49.247 MallocForNvmf0 00:04:49.247 07:23:53 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:49.247 07:23:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:49.507 MallocForNvmf1 00:04:49.507 07:23:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:49.507 07:23:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:49.507 [2024-10-07 07:23:53.476938] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.766 07:23:53 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.766 07:23:53 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.766 07:23:53 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.766 07:23:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:50.025 07:23:53 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:50.025 07:23:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:50.284 07:23:54 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:50.285 07:23:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:50.285 [2024-10-07 07:23:54.211193] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.285 07:23:54 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:50.285 07:23:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.285 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.544 07:23:54 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:50.544 07:23:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.544 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.544 07:23:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:50.544 07:23:54 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.544 07:23:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.544 MallocBdevForConfigChangeCheck 00:04:50.544 07:23:54 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:50.544 07:23:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:50.544 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.544 07:23:54 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:50.544 07:23:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.112 07:23:54 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:51.112 INFO: shutting down applications... 00:04:51.112 07:23:54 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:51.112 07:23:54 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:51.112 07:23:54 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:51.112 07:23:54 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:52.491 Calling clear_iscsi_subsystem 00:04:52.491 Calling clear_nvmf_subsystem 00:04:52.492 Calling clear_nbd_subsystem 00:04:52.492 Calling clear_ublk_subsystem 00:04:52.492 Calling clear_vhost_blk_subsystem 00:04:52.492 Calling clear_vhost_scsi_subsystem 00:04:52.492 Calling clear_scheduler_subsystem 00:04:52.492 Calling clear_bdev_subsystem 00:04:52.492 Calling clear_accel_subsystem 00:04:52.492 Calling clear_vmd_subsystem 00:04:52.492 Calling clear_sock_subsystem 00:04:52.492 Calling clear_iobuf_subsystem 00:04:52.492 07:23:56 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:52.492 07:23:56 -- json_config/json_config.sh@396 -- # count=100 00:04:52.492 07:23:56 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:52.492 07:23:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.492 07:23:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:52.492 07:23:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:52.751 07:23:56 -- json_config/json_config.sh@398 -- # break 00:04:52.751 07:23:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:52.751 07:23:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:52.751 07:23:56 -- json_config/json_config.sh@120 -- # local app=target 00:04:52.751 07:23:56 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:52.751 07:23:56 -- json_config/json_config.sh@124 -- # [[ -n 3940749 ]] 00:04:52.751 07:23:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 3940749 00:04:52.751 07:23:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:52.751 07:23:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:52.751 07:23:56 -- json_config/json_config.sh@130 -- # kill -0 3940749 00:04:52.751 07:23:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:53.353 07:23:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:53.353 07:23:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:53.353 07:23:57 -- json_config/json_config.sh@130 -- # kill -0 3940749 00:04:53.353 07:23:57 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:53.353 07:23:57 -- json_config/json_config.sh@132 -- # break 00:04:53.353 07:23:57 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:53.353 07:23:57 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:53.353 SPDK target shutdown done 00:04:53.353 07:23:57 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:53.353 INFO: relaunching applications... 00:04:53.353 07:23:57 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.353 07:23:57 -- json_config/json_config.sh@98 -- # local app=target 00:04:53.353 07:23:57 -- json_config/json_config.sh@99 -- # shift 00:04:53.353 07:23:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:53.353 07:23:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:53.353 07:23:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:53.353 07:23:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:53.353 07:23:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:53.353 07:23:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=3942247 00:04:53.353 07:23:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:53.353 Waiting for target to run... 
00:04:53.353 07:23:57 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.353 07:23:57 -- json_config/json_config.sh@114 -- # waitforlisten 3942247 /var/tmp/spdk_tgt.sock 00:04:53.353 07:23:57 -- common/autotest_common.sh@819 -- # '[' -z 3942247 ']' 00:04:53.353 07:23:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.353 07:23:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.353 07:23:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.353 07:23:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.353 07:23:57 -- common/autotest_common.sh@10 -- # set +x 00:04:53.353 [2024-10-07 07:23:57.243198] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:53.353 [2024-10-07 07:23:57.243247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3942247 ] 00:04:53.353 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.921 [2024-10-07 07:23:57.684652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.921 [2024-10-07 07:23:57.769547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.921 [2024-10-07 07:23:57.769652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.214 [2024-10-07 07:24:00.766845] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.214 [2024-10-07 07:24:00.799155] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:57.473 07:24:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.473 07:24:01 -- common/autotest_common.sh@852 -- # return 0 00:04:57.473 07:24:01 -- json_config/json_config.sh@115 -- # echo '' 00:04:57.473 00:04:57.473 07:24:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:57.473 07:24:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:57.473 INFO: Checking if target configuration is the same... 
00:04:57.473 07:24:01 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.473 07:24:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:57.473 07:24:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.473 + '[' 2 -ne 2 ']' 00:04:57.473 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.473 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:57.473 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:57.473 +++ basename /dev/fd/62 00:04:57.473 ++ mktemp /tmp/62.XXX 00:04:57.473 + tmp_file_1=/tmp/62.wsY 00:04:57.473 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.473 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.473 + tmp_file_2=/tmp/spdk_tgt_config.json.H73 00:04:57.473 + ret=0 00:04:57.473 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.041 + diff -u /tmp/62.wsY /tmp/spdk_tgt_config.json.H73 00:04:58.041 + echo 'INFO: JSON config files are the same' 00:04:58.041 INFO: JSON config files are the same 00:04:58.041 + rm /tmp/62.wsY /tmp/spdk_tgt_config.json.H73 00:04:58.041 + exit 0 00:04:58.041 07:24:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:58.041 07:24:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.041 INFO: changing configuration and checking if this can be detected... 
00:04:58.041 07:24:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.041 07:24:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.041 07:24:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:58.041 07:24:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.041 07:24:01 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.041 + '[' 2 -ne 2 ']' 00:04:58.041 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.041 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:58.041 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.041 +++ basename /dev/fd/62 00:04:58.041 ++ mktemp /tmp/62.XXX 00:04:58.041 + tmp_file_1=/tmp/62.Wu8 00:04:58.041 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.041 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.041 + tmp_file_2=/tmp/spdk_tgt_config.json.7Az 00:04:58.041 + ret=0 00:04:58.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.300 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.559 + diff -u /tmp/62.Wu8 /tmp/spdk_tgt_config.json.7Az 00:04:58.559 + ret=1 00:04:58.559 + echo '=== Start of file: /tmp/62.Wu8 ===' 00:04:58.559 + cat /tmp/62.Wu8 00:04:58.559 + echo '=== End of file: /tmp/62.Wu8 ===' 00:04:58.559 + echo '' 00:04:58.559 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7Az ===' 00:04:58.559 + cat /tmp/spdk_tgt_config.json.7Az 00:04:58.559 + echo '=== End of file: /tmp/spdk_tgt_config.json.7Az ===' 00:04:58.559 + echo '' 00:04:58.559 + rm /tmp/62.Wu8 /tmp/spdk_tgt_config.json.7Az 00:04:58.559 + exit 1 00:04:58.559 07:24:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:58.559 INFO: configuration change detected. 
00:04:58.559 07:24:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:58.559 07:24:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:58.559 07:24:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:58.559 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.559 07:24:02 -- json_config/json_config.sh@360 -- # local ret=0 00:04:58.559 07:24:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:58.559 07:24:02 -- json_config/json_config.sh@370 -- # [[ -n 3942247 ]] 00:04:58.559 07:24:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:58.559 07:24:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:58.559 07:24:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:58.559 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.559 07:24:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:58.559 07:24:02 -- json_config/json_config.sh@246 -- # uname -s 00:04:58.559 07:24:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:58.559 07:24:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:58.559 07:24:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:58.559 07:24:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:58.559 07:24:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:58.559 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:04:58.559 07:24:02 -- json_config/json_config.sh@376 -- # killprocess 3942247 00:04:58.559 07:24:02 -- common/autotest_common.sh@926 -- # '[' -z 3942247 ']' 00:04:58.559 07:24:02 -- common/autotest_common.sh@930 -- # kill -0 3942247 00:04:58.559 07:24:02 -- common/autotest_common.sh@931 -- # uname 00:04:58.559 07:24:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.559 07:24:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3942247 00:04:58.559 
07:24:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.559 07:24:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.559 07:24:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3942247' 00:04:58.559 killing process with pid 3942247 00:04:58.559 07:24:02 -- common/autotest_common.sh@945 -- # kill 3942247 00:04:58.559 07:24:02 -- common/autotest_common.sh@950 -- # wait 3942247 00:05:00.466 07:24:03 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.466 07:24:03 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:00.466 07:24:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:00.466 07:24:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.466 07:24:03 -- json_config/json_config.sh@381 -- # return 0 00:05:00.466 07:24:03 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:00.466 INFO: Success 00:05:00.466 00:05:00.466 real 0m15.273s 00:05:00.466 user 0m16.625s 00:05:00.466 sys 0m1.940s 00:05:00.466 07:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.466 07:24:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.466 ************************************ 00:05:00.466 END TEST json_config 00:05:00.466 ************************************ 00:05:00.466 07:24:03 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.466 07:24:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.466 07:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.466 07:24:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.466 ************************************ 00:05:00.466 START TEST json_config_extra_key 00:05:00.466 ************************************ 00:05:00.466 
07:24:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.466 07:24:04 -- nvmf/common.sh@7 -- # uname -s 00:05:00.466 07:24:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.466 07:24:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.466 07:24:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.466 07:24:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.466 07:24:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.466 07:24:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.466 07:24:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.466 07:24:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.466 07:24:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.466 07:24:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.466 07:24:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:00.466 07:24:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:00.466 07:24:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.466 07:24:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.466 07:24:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.466 07:24:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:00.466 07:24:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.466 07:24:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.466 07:24:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.466 07:24:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.466 07:24:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.466 07:24:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.466 07:24:04 -- paths/export.sh@5 -- # export PATH 00:05:00.466 07:24:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.466 07:24:04 -- nvmf/common.sh@46 -- # : 0 00:05:00.466 07:24:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:00.466 07:24:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:00.466 
07:24:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:00.466 07:24:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.466 07:24:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.466 07:24:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:00.466 07:24:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:00.466 07:24:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:00.466 INFO: launching applications... 
00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3943631 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:00.466 Waiting for target to run... 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3943631 /var/tmp/spdk_tgt.sock 00:05:00.466 07:24:04 -- common/autotest_common.sh@819 -- # '[' -z 3943631 ']' 00:05:00.466 07:24:04 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:00.466 07:24:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.466 07:24:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.466 07:24:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.466 07:24:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.466 07:24:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.466 [2024-10-07 07:24:04.122339] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:00.466 [2024-10-07 07:24:04.122392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943631 ] 00:05:00.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.726 [2024-10-07 07:24:04.561115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.726 [2024-10-07 07:24:04.646687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.726 [2024-10-07 07:24:04.646789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.986 07:24:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.986 07:24:04 -- common/autotest_common.sh@852 -- # return 0 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:00.986 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:00.986 INFO: shutting down applications... 
00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3943631 ]] 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3943631 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3943631 00:05:00.986 07:24:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3943631 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:01.556 SPDK target shutdown done 00:05:01.556 07:24:05 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:01.556 Success 00:05:01.556 00:05:01.556 real 0m1.464s 00:05:01.556 user 0m1.148s 00:05:01.556 sys 0m0.508s 00:05:01.556 07:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.556 07:24:05 -- common/autotest_common.sh@10 -- # set +x 00:05:01.556 ************************************ 00:05:01.556 END TEST json_config_extra_key 00:05:01.556 ************************************ 00:05:01.556 07:24:05 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.556 07:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.556 07:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.556 07:24:05 -- common/autotest_common.sh@10 -- # set +x 00:05:01.556 ************************************ 00:05:01.556 START TEST alias_rpc 00:05:01.556 ************************************ 00:05:01.556 07:24:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.815 * Looking for test storage... 00:05:01.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:01.815 07:24:05 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.815 07:24:05 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3943914 00:05:01.815 07:24:05 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3943914 00:05:01.815 07:24:05 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.815 07:24:05 -- common/autotest_common.sh@819 -- # '[' -z 3943914 ']' 00:05:01.815 07:24:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.815 07:24:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.815 07:24:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.815 07:24:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.815 07:24:05 -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 [2024-10-07 07:24:05.618391] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:01.815 [2024-10-07 07:24:05.618450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943914 ] 00:05:01.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.815 [2024-10-07 07:24:05.673247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.815 [2024-10-07 07:24:05.749308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.815 [2024-10-07 07:24:05.749424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.754 07:24:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.754 07:24:06 -- common/autotest_common.sh@852 -- # return 0 00:05:02.754 07:24:06 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:02.754 07:24:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3943914 00:05:02.754 07:24:06 -- common/autotest_common.sh@926 -- # '[' -z 3943914 ']' 00:05:02.754 07:24:06 -- common/autotest_common.sh@930 -- # kill -0 3943914 00:05:02.754 07:24:06 -- common/autotest_common.sh@931 -- # uname 00:05:02.754 07:24:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.754 07:24:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3943914 00:05:02.754 07:24:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:02.754 07:24:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:02.754 07:24:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3943914' 00:05:02.754 killing process with pid 3943914 00:05:02.754 07:24:06 -- common/autotest_common.sh@945 -- # kill 3943914 00:05:02.754 07:24:06 -- common/autotest_common.sh@950 -- # wait 3943914 00:05:03.324 00:05:03.324 real 0m1.532s 00:05:03.324 user 0m1.718s 00:05:03.324 sys 0m0.379s 
00:05:03.324 07:24:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.324 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.324 ************************************ 00:05:03.324 END TEST alias_rpc 00:05:03.324 ************************************ 00:05:03.324 07:24:07 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:03.324 07:24:07 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:03.324 07:24:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.324 07:24:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.324 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.324 ************************************ 00:05:03.324 START TEST spdkcli_tcp 00:05:03.324 ************************************ 00:05:03.324 07:24:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:03.324 * Looking for test storage... 
00:05:03.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:03.324 07:24:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:03.324 07:24:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:03.324 07:24:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:03.324 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3944305 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@27 -- # waitforlisten 3944305 00:05:03.324 07:24:07 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:03.324 07:24:07 -- common/autotest_common.sh@819 -- # '[' -z 3944305 ']' 00:05:03.324 07:24:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.324 07:24:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.324 07:24:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.324 07:24:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.324 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:05:03.324 [2024-10-07 07:24:07.192025] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:03.324 [2024-10-07 07:24:07.192087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944305 ] 00:05:03.324 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.324 [2024-10-07 07:24:07.247424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.583 [2024-10-07 07:24:07.324027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:03.583 [2024-10-07 07:24:07.324174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.583 [2024-10-07 07:24:07.324177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.152 07:24:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.152 07:24:08 -- common/autotest_common.sh@852 -- # return 0 00:05:04.152 07:24:08 -- spdkcli/tcp.sh@31 -- # socat_pid=3944499 00:05:04.152 07:24:08 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.152 07:24:08 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.412 [ 00:05:04.412 "bdev_malloc_delete", 00:05:04.412 "bdev_malloc_create", 00:05:04.412 "bdev_null_resize", 00:05:04.412 "bdev_null_delete", 00:05:04.412 "bdev_null_create", 00:05:04.412 "bdev_nvme_cuse_unregister", 00:05:04.412 "bdev_nvme_cuse_register", 00:05:04.412 "bdev_opal_new_user", 00:05:04.412 "bdev_opal_set_lock_state", 00:05:04.412 "bdev_opal_delete", 00:05:04.412 "bdev_opal_get_info", 00:05:04.412 "bdev_opal_create", 00:05:04.412 
"bdev_nvme_opal_revert", 00:05:04.412 "bdev_nvme_opal_init", 00:05:04.412 "bdev_nvme_send_cmd", 00:05:04.412 "bdev_nvme_get_path_iostat", 00:05:04.412 "bdev_nvme_get_mdns_discovery_info", 00:05:04.412 "bdev_nvme_stop_mdns_discovery", 00:05:04.412 "bdev_nvme_start_mdns_discovery", 00:05:04.412 "bdev_nvme_set_multipath_policy", 00:05:04.412 "bdev_nvme_set_preferred_path", 00:05:04.412 "bdev_nvme_get_io_paths", 00:05:04.412 "bdev_nvme_remove_error_injection", 00:05:04.412 "bdev_nvme_add_error_injection", 00:05:04.412 "bdev_nvme_get_discovery_info", 00:05:04.412 "bdev_nvme_stop_discovery", 00:05:04.412 "bdev_nvme_start_discovery", 00:05:04.412 "bdev_nvme_get_controller_health_info", 00:05:04.412 "bdev_nvme_disable_controller", 00:05:04.412 "bdev_nvme_enable_controller", 00:05:04.412 "bdev_nvme_reset_controller", 00:05:04.412 "bdev_nvme_get_transport_statistics", 00:05:04.412 "bdev_nvme_apply_firmware", 00:05:04.412 "bdev_nvme_detach_controller", 00:05:04.412 "bdev_nvme_get_controllers", 00:05:04.412 "bdev_nvme_attach_controller", 00:05:04.412 "bdev_nvme_set_hotplug", 00:05:04.412 "bdev_nvme_set_options", 00:05:04.412 "bdev_passthru_delete", 00:05:04.412 "bdev_passthru_create", 00:05:04.412 "bdev_lvol_grow_lvstore", 00:05:04.412 "bdev_lvol_get_lvols", 00:05:04.412 "bdev_lvol_get_lvstores", 00:05:04.412 "bdev_lvol_delete", 00:05:04.412 "bdev_lvol_set_read_only", 00:05:04.412 "bdev_lvol_resize", 00:05:04.412 "bdev_lvol_decouple_parent", 00:05:04.412 "bdev_lvol_inflate", 00:05:04.412 "bdev_lvol_rename", 00:05:04.412 "bdev_lvol_clone_bdev", 00:05:04.412 "bdev_lvol_clone", 00:05:04.412 "bdev_lvol_snapshot", 00:05:04.412 "bdev_lvol_create", 00:05:04.412 "bdev_lvol_delete_lvstore", 00:05:04.412 "bdev_lvol_rename_lvstore", 00:05:04.412 "bdev_lvol_create_lvstore", 00:05:04.412 "bdev_raid_set_options", 00:05:04.412 "bdev_raid_remove_base_bdev", 00:05:04.412 "bdev_raid_add_base_bdev", 00:05:04.412 "bdev_raid_delete", 00:05:04.412 "bdev_raid_create", 00:05:04.412 
"bdev_raid_get_bdevs", 00:05:04.412 "bdev_error_inject_error", 00:05:04.412 "bdev_error_delete", 00:05:04.412 "bdev_error_create", 00:05:04.412 "bdev_split_delete", 00:05:04.412 "bdev_split_create", 00:05:04.412 "bdev_delay_delete", 00:05:04.412 "bdev_delay_create", 00:05:04.412 "bdev_delay_update_latency", 00:05:04.412 "bdev_zone_block_delete", 00:05:04.412 "bdev_zone_block_create", 00:05:04.412 "blobfs_create", 00:05:04.412 "blobfs_detect", 00:05:04.412 "blobfs_set_cache_size", 00:05:04.412 "bdev_aio_delete", 00:05:04.412 "bdev_aio_rescan", 00:05:04.412 "bdev_aio_create", 00:05:04.412 "bdev_ftl_set_property", 00:05:04.412 "bdev_ftl_get_properties", 00:05:04.412 "bdev_ftl_get_stats", 00:05:04.412 "bdev_ftl_unmap", 00:05:04.412 "bdev_ftl_unload", 00:05:04.412 "bdev_ftl_delete", 00:05:04.412 "bdev_ftl_load", 00:05:04.412 "bdev_ftl_create", 00:05:04.412 "bdev_virtio_attach_controller", 00:05:04.412 "bdev_virtio_scsi_get_devices", 00:05:04.412 "bdev_virtio_detach_controller", 00:05:04.412 "bdev_virtio_blk_set_hotplug", 00:05:04.412 "bdev_iscsi_delete", 00:05:04.412 "bdev_iscsi_create", 00:05:04.412 "bdev_iscsi_set_options", 00:05:04.412 "accel_error_inject_error", 00:05:04.412 "ioat_scan_accel_module", 00:05:04.412 "dsa_scan_accel_module", 00:05:04.412 "iaa_scan_accel_module", 00:05:04.412 "iscsi_set_options", 00:05:04.412 "iscsi_get_auth_groups", 00:05:04.412 "iscsi_auth_group_remove_secret", 00:05:04.412 "iscsi_auth_group_add_secret", 00:05:04.412 "iscsi_delete_auth_group", 00:05:04.412 "iscsi_create_auth_group", 00:05:04.412 "iscsi_set_discovery_auth", 00:05:04.412 "iscsi_get_options", 00:05:04.412 "iscsi_target_node_request_logout", 00:05:04.412 "iscsi_target_node_set_redirect", 00:05:04.412 "iscsi_target_node_set_auth", 00:05:04.412 "iscsi_target_node_add_lun", 00:05:04.412 "iscsi_get_connections", 00:05:04.412 "iscsi_portal_group_set_auth", 00:05:04.412 "iscsi_start_portal_group", 00:05:04.412 "iscsi_delete_portal_group", 00:05:04.412 
"iscsi_create_portal_group", 00:05:04.412 "iscsi_get_portal_groups", 00:05:04.412 "iscsi_delete_target_node", 00:05:04.412 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.412 "iscsi_target_node_add_pg_ig_maps", 00:05:04.412 "iscsi_create_target_node", 00:05:04.412 "iscsi_get_target_nodes", 00:05:04.412 "iscsi_delete_initiator_group", 00:05:04.412 "iscsi_initiator_group_remove_initiators", 00:05:04.412 "iscsi_initiator_group_add_initiators", 00:05:04.412 "iscsi_create_initiator_group", 00:05:04.412 "iscsi_get_initiator_groups", 00:05:04.413 "nvmf_set_crdt", 00:05:04.413 "nvmf_set_config", 00:05:04.413 "nvmf_set_max_subsystems", 00:05:04.413 "nvmf_subsystem_get_listeners", 00:05:04.413 "nvmf_subsystem_get_qpairs", 00:05:04.413 "nvmf_subsystem_get_controllers", 00:05:04.413 "nvmf_get_stats", 00:05:04.413 "nvmf_get_transports", 00:05:04.413 "nvmf_create_transport", 00:05:04.413 "nvmf_get_targets", 00:05:04.413 "nvmf_delete_target", 00:05:04.413 "nvmf_create_target", 00:05:04.413 "nvmf_subsystem_allow_any_host", 00:05:04.413 "nvmf_subsystem_remove_host", 00:05:04.413 "nvmf_subsystem_add_host", 00:05:04.413 "nvmf_subsystem_remove_ns", 00:05:04.413 "nvmf_subsystem_add_ns", 00:05:04.413 "nvmf_subsystem_listener_set_ana_state", 00:05:04.413 "nvmf_discovery_get_referrals", 00:05:04.413 "nvmf_discovery_remove_referral", 00:05:04.413 "nvmf_discovery_add_referral", 00:05:04.413 "nvmf_subsystem_remove_listener", 00:05:04.413 "nvmf_subsystem_add_listener", 00:05:04.413 "nvmf_delete_subsystem", 00:05:04.413 "nvmf_create_subsystem", 00:05:04.413 "nvmf_get_subsystems", 00:05:04.413 "env_dpdk_get_mem_stats", 00:05:04.413 "nbd_get_disks", 00:05:04.413 "nbd_stop_disk", 00:05:04.413 "nbd_start_disk", 00:05:04.413 "ublk_recover_disk", 00:05:04.413 "ublk_get_disks", 00:05:04.413 "ublk_stop_disk", 00:05:04.413 "ublk_start_disk", 00:05:04.413 "ublk_destroy_target", 00:05:04.413 "ublk_create_target", 00:05:04.413 "virtio_blk_create_transport", 00:05:04.413 "virtio_blk_get_transports", 
00:05:04.413 "vhost_controller_set_coalescing", 00:05:04.413 "vhost_get_controllers", 00:05:04.413 "vhost_delete_controller", 00:05:04.413 "vhost_create_blk_controller", 00:05:04.413 "vhost_scsi_controller_remove_target", 00:05:04.413 "vhost_scsi_controller_add_target", 00:05:04.413 "vhost_start_scsi_controller", 00:05:04.413 "vhost_create_scsi_controller", 00:05:04.413 "thread_set_cpumask", 00:05:04.413 "framework_get_scheduler", 00:05:04.413 "framework_set_scheduler", 00:05:04.413 "framework_get_reactors", 00:05:04.413 "thread_get_io_channels", 00:05:04.413 "thread_get_pollers", 00:05:04.413 "thread_get_stats", 00:05:04.413 "framework_monitor_context_switch", 00:05:04.413 "spdk_kill_instance", 00:05:04.413 "log_enable_timestamps", 00:05:04.413 "log_get_flags", 00:05:04.413 "log_clear_flag", 00:05:04.413 "log_set_flag", 00:05:04.413 "log_get_level", 00:05:04.413 "log_set_level", 00:05:04.413 "log_get_print_level", 00:05:04.413 "log_set_print_level", 00:05:04.413 "framework_enable_cpumask_locks", 00:05:04.413 "framework_disable_cpumask_locks", 00:05:04.413 "framework_wait_init", 00:05:04.413 "framework_start_init", 00:05:04.413 "scsi_get_devices", 00:05:04.413 "bdev_get_histogram", 00:05:04.413 "bdev_enable_histogram", 00:05:04.413 "bdev_set_qos_limit", 00:05:04.413 "bdev_set_qd_sampling_period", 00:05:04.413 "bdev_get_bdevs", 00:05:04.413 "bdev_reset_iostat", 00:05:04.413 "bdev_get_iostat", 00:05:04.413 "bdev_examine", 00:05:04.413 "bdev_wait_for_examine", 00:05:04.413 "bdev_set_options", 00:05:04.413 "notify_get_notifications", 00:05:04.413 "notify_get_types", 00:05:04.413 "accel_get_stats", 00:05:04.413 "accel_set_options", 00:05:04.413 "accel_set_driver", 00:05:04.413 "accel_crypto_key_destroy", 00:05:04.413 "accel_crypto_keys_get", 00:05:04.413 "accel_crypto_key_create", 00:05:04.413 "accel_assign_opc", 00:05:04.413 "accel_get_module_info", 00:05:04.413 "accel_get_opc_assignments", 00:05:04.413 "vmd_rescan", 00:05:04.413 "vmd_remove_device", 00:05:04.413 
"vmd_enable", 00:05:04.413 "sock_set_default_impl", 00:05:04.413 "sock_impl_set_options", 00:05:04.413 "sock_impl_get_options", 00:05:04.413 "iobuf_get_stats", 00:05:04.413 "iobuf_set_options", 00:05:04.413 "framework_get_pci_devices", 00:05:04.413 "framework_get_config", 00:05:04.413 "framework_get_subsystems", 00:05:04.413 "trace_get_info", 00:05:04.413 "trace_get_tpoint_group_mask", 00:05:04.413 "trace_disable_tpoint_group", 00:05:04.413 "trace_enable_tpoint_group", 00:05:04.413 "trace_clear_tpoint_mask", 00:05:04.413 "trace_set_tpoint_mask", 00:05:04.413 "spdk_get_version", 00:05:04.413 "rpc_get_methods" 00:05:04.413 ] 00:05:04.413 07:24:08 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:04.413 07:24:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:04.413 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:04.413 07:24:08 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:04.413 07:24:08 -- spdkcli/tcp.sh@38 -- # killprocess 3944305 00:05:04.413 07:24:08 -- common/autotest_common.sh@926 -- # '[' -z 3944305 ']' 00:05:04.413 07:24:08 -- common/autotest_common.sh@930 -- # kill -0 3944305 00:05:04.413 07:24:08 -- common/autotest_common.sh@931 -- # uname 00:05:04.413 07:24:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.413 07:24:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3944305 00:05:04.413 07:24:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.413 07:24:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.413 07:24:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3944305' 00:05:04.413 killing process with pid 3944305 00:05:04.413 07:24:08 -- common/autotest_common.sh@945 -- # kill 3944305 00:05:04.413 07:24:08 -- common/autotest_common.sh@950 -- # wait 3944305 00:05:04.672 00:05:04.672 real 0m1.580s 00:05:04.672 user 0m2.997s 00:05:04.672 sys 0m0.427s 00:05:04.672 07:24:08 -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:05:04.672 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:04.672 ************************************ 00:05:04.672 END TEST spdkcli_tcp 00:05:04.672 ************************************ 00:05:04.931 07:24:08 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.931 07:24:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.931 07:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.931 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:04.931 ************************************ 00:05:04.932 START TEST dpdk_mem_utility 00:05:04.932 ************************************ 00:05:04.932 07:24:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:04.932 * Looking for test storage... 00:05:04.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:04.932 07:24:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.932 07:24:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3944938 00:05:04.932 07:24:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3944938 00:05:04.932 07:24:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.932 07:24:08 -- common/autotest_common.sh@819 -- # '[' -z 3944938 ']' 00:05:04.932 07:24:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.932 07:24:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:04.932 07:24:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:04.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.932 07:24:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:04.932 07:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:04.932 [2024-10-07 07:24:08.791338] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:04.932 [2024-10-07 07:24:08.791389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944938 ] 00:05:04.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.932 [2024-10-07 07:24:08.845577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.191 [2024-10-07 07:24:08.921722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.191 [2024-10-07 07:24:08.921832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.759 07:24:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.759 07:24:09 -- common/autotest_common.sh@852 -- # return 0 00:05:05.759 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.759 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.759 07:24:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:05.759 07:24:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.760 { 00:05:05.760 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.760 } 00:05:05.760 07:24:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:05.760 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:05.760 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:05.760 1 heaps totaling size 814.000000 MiB 00:05:05.760 size: 
814.000000 MiB heap id: 0 00:05:05.760 end heaps---------- 00:05:05.760 8 mempools totaling size 598.116089 MiB 00:05:05.760 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.760 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.760 size: 84.521057 MiB name: bdev_io_3944938 00:05:05.760 size: 51.011292 MiB name: evtpool_3944938 00:05:05.760 size: 50.003479 MiB name: msgpool_3944938 00:05:05.760 size: 21.763794 MiB name: PDU_Pool 00:05:05.760 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.760 size: 0.026123 MiB name: Session_Pool 00:05:05.760 end mempools------- 00:05:05.760 6 memzones totaling size 4.142822 MiB 00:05:05.760 size: 1.000366 MiB name: RG_ring_0_3944938 00:05:05.760 size: 1.000366 MiB name: RG_ring_1_3944938 00:05:05.760 size: 1.000366 MiB name: RG_ring_4_3944938 00:05:05.760 size: 1.000366 MiB name: RG_ring_5_3944938 00:05:05.760 size: 0.125366 MiB name: RG_ring_2_3944938 00:05:05.760 size: 0.015991 MiB name: RG_ring_3_3944938 00:05:05.760 end memzones------- 00:05:05.760 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.760 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:05.760 list of free elements. 
size: 12.519348 MiB 00:05:05.760 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:05.760 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:05.760 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:05.760 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:05.760 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:05.760 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:05.760 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:05.760 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:05.760 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:05.760 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:05.760 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:05.760 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:05.760 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:05.760 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:05.760 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:05.760 list of standard malloc elements. 
size: 199.218079 MiB 00:05:05.760 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:05.760 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:05.760 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:05.760 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:05.760 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:05.760 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:05.760 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:05.760 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:05.760 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:05.760 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:05.760 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:05.760 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:05.760 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:05.760 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:05.760 list of memzone associated elements. 
size: 602.262573 MiB 00:05:05.760 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:05.760 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.760 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:05.760 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.760 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:05.760 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3944938_0 00:05:05.760 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:05.760 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3944938_0 00:05:05.760 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:05.760 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3944938_0 00:05:05.760 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:05.760 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.760 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:05.760 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.760 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:05.760 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3944938 00:05:05.760 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:05.760 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3944938 00:05:05.760 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:05.760 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3944938 00:05:05.760 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:05.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.760 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:05.760 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.760 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:05.760 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.760 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:05.760 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.760 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3944938 00:05:05.760 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3944938 00:05:05.760 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3944938 00:05:05.760 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:05.760 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3944938 00:05:05.760 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:05.760 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3944938 00:05:05.760 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:05.761 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.761 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:05.761 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.761 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:05.761 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.761 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:05.761 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3944938 00:05:05.761 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:05.761 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.761 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:05.761 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.761 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:05.761 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3944938 00:05:05.761 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:05.761 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.761 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:05.761 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3944938 00:05:05.761 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:05.761 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3944938 00:05:05.761 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:05.761 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.761 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.761 07:24:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3944938 00:05:05.761 07:24:09 -- common/autotest_common.sh@926 -- # '[' -z 3944938 ']' 00:05:05.761 07:24:09 -- common/autotest_common.sh@930 -- # kill -0 3944938 00:05:05.761 07:24:09 -- common/autotest_common.sh@931 -- # uname 00:05:05.761 07:24:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:05.761 07:24:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3944938 00:05:06.020 07:24:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.020 07:24:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.020 07:24:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3944938' 00:05:06.020 killing process with pid 3944938 00:05:06.020 07:24:09 -- common/autotest_common.sh@945 -- # kill 3944938 00:05:06.020 07:24:09 -- common/autotest_common.sh@950 -- # wait 3944938 00:05:06.279 00:05:06.279 real 0m1.421s 00:05:06.279 user 0m1.532s 00:05:06.279 sys 0m0.376s 00:05:06.279 07:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.279 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:06.279 
************************************ 00:05:06.279 END TEST dpdk_mem_utility 00:05:06.279 ************************************ 00:05:06.279 07:24:10 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:06.279 07:24:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.279 07:24:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.279 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:06.279 ************************************ 00:05:06.279 START TEST event 00:05:06.279 ************************************ 00:05:06.279 07:24:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:06.279 * Looking for test storage... 00:05:06.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:06.279 07:24:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:06.279 07:24:10 -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.279 07:24:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.279 07:24:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:06.279 07:24:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.279 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:06.279 ************************************ 00:05:06.279 START TEST event_perf 00:05:06.279 ************************************ 00:05:06.279 07:24:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.279 Running I/O for 1 seconds...[2024-10-07 07:24:10.237963] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:06.279 [2024-10-07 07:24:10.238043] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945370 ] 00:05:06.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.539 [2024-10-07 07:24:10.299410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.539 [2024-10-07 07:24:10.371662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.539 [2024-10-07 07:24:10.371765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.539 [2024-10-07 07:24:10.371832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.539 [2024-10-07 07:24:10.371834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.918 Running I/O for 1 seconds... 00:05:07.918 lcore 0: 211617 00:05:07.918 lcore 1: 211618 00:05:07.918 lcore 2: 211617 00:05:07.918 lcore 3: 211617 00:05:07.918 done. 
00:05:07.918 00:05:07.918 real 0m1.241s 00:05:07.918 user 0m4.157s 00:05:07.918 sys 0m0.081s 00:05:07.918 07:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.918 07:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.918 ************************************ 00:05:07.918 END TEST event_perf 00:05:07.918 ************************************ 00:05:07.918 07:24:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.918 07:24:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:07.918 07:24:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.918 07:24:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.918 ************************************ 00:05:07.918 START TEST event_reactor 00:05:07.918 ************************************ 00:05:07.918 07:24:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:07.918 [2024-10-07 07:24:11.519998] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:07.918 [2024-10-07 07:24:11.520084] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945623 ] 00:05:07.918 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.918 [2024-10-07 07:24:11.579738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.918 [2024-10-07 07:24:11.645907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.857 test_start 00:05:08.857 oneshot 00:05:08.857 tick 100 00:05:08.857 tick 100 00:05:08.857 tick 250 00:05:08.857 tick 100 00:05:08.857 tick 100 00:05:08.857 tick 250 00:05:08.857 tick 500 00:05:08.857 tick 100 00:05:08.857 tick 100 00:05:08.857 tick 100 00:05:08.857 tick 250 00:05:08.857 tick 100 00:05:08.857 tick 100 00:05:08.857 test_end 00:05:08.857 00:05:08.857 real 0m1.237s 00:05:08.857 user 0m1.160s 00:05:08.857 sys 0m0.074s 00:05:08.857 07:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.857 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.857 ************************************ 00:05:08.857 END TEST event_reactor 00:05:08.857 ************************************ 00:05:08.857 07:24:12 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.857 07:24:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:08.857 07:24:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.857 07:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:08.857 ************************************ 00:05:08.857 START TEST event_reactor_perf 00:05:08.857 ************************************ 00:05:08.857 07:24:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.857 [2024-10-07 07:24:12.795991] 
Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:08.857 [2024-10-07 07:24:12.796083] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945868 ] 00:05:08.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.116 [2024-10-07 07:24:12.853269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.116 [2024-10-07 07:24:12.919463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.051 test_start 00:05:10.051 test_end 00:05:10.051 Performance: 514982 events per second 00:05:10.051 00:05:10.051 real 0m1.230s 00:05:10.051 user 0m1.157s 00:05:10.051 sys 0m0.069s 00:05:10.051 07:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.051 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.051 ************************************ 00:05:10.051 END TEST event_reactor_perf 00:05:10.051 ************************************ 00:05:10.309 07:24:14 -- event/event.sh@49 -- # uname -s 00:05:10.309 07:24:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.309 07:24:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:10.309 07:24:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.309 07:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.309 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.309 ************************************ 00:05:10.309 START TEST event_scheduler 00:05:10.309 ************************************ 00:05:10.309 07:24:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:10.309 * Looking for test storage... 
00:05:10.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:10.309 07:24:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.309 07:24:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3946135 00:05:10.309 07:24:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.309 07:24:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.309 07:24:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 3946135 00:05:10.309 07:24:14 -- common/autotest_common.sh@819 -- # '[' -z 3946135 ']' 00:05:10.309 07:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.309 07:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.309 07:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.309 07:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.309 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.309 [2024-10-07 07:24:14.167286] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:10.310 [2024-10-07 07:24:14.167332] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946135 ] 00:05:10.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.310 [2024-10-07 07:24:14.217420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.569 [2024-10-07 07:24:14.289566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.569 [2024-10-07 07:24:14.289655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.569 [2024-10-07 07:24:14.289740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.569 [2024-10-07 07:24:14.289741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.136 07:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.136 07:24:14 -- common/autotest_common.sh@852 -- # return 0 00:05:11.136 07:24:14 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.136 07:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.136 07:24:14 -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 POWER: Env isn't set yet! 00:05:11.136 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:11.136 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.137 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.137 POWER: Attempting to initialise PSTAT power management... 
00:05:11.137 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:11.137 POWER: Initialized successfully for lcore 0 power management 00:05:11.137 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:11.137 POWER: Initialized successfully for lcore 1 power management 00:05:11.137 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:11.137 POWER: Initialized successfully for lcore 2 power management 00:05:11.137 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:11.137 POWER: Initialized successfully for lcore 3 power management 00:05:11.137 [2024-10-07 07:24:15.027329] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.137 [2024-10-07 07:24:15.027346] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.137 [2024-10-07 07:24:15.027355] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.137 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.137 07:24:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.137 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.137 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.137 [2024-10-07 07:24:15.095337] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:11.137 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.137 07:24:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.137 07:24:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.137 07:24:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.137 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.137 ************************************ 00:05:11.137 START TEST scheduler_create_thread 00:05:11.137 ************************************ 00:05:11.137 07:24:15 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:11.137 07:24:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.137 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.137 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 2 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 3 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 4 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 
07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 5 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 6 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 7 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 8 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 9 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 10 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:11.397 07:24:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 07:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.397 07:24:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.397 07:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:11.397 07:24:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.334 07:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:12.334 07:24:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.334 07:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:12.334 07:24:16 -- common/autotest_common.sh@10 -- # set +x 00:05:13.710 07:24:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:13.710 07:24:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.710 07:24:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.710 07:24:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:13.710 07:24:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.647 07:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.647 00:05:14.647 real 0m3.383s 00:05:14.647 user 0m0.024s 00:05:14.647 sys 0m0.004s 00:05:14.647 07:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.647 07:24:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.647 ************************************ 00:05:14.647 END TEST scheduler_create_thread 00:05:14.647 ************************************ 00:05:14.647 07:24:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.647 07:24:18 -- 
scheduler/scheduler.sh@46 -- # killprocess 3946135 00:05:14.647 07:24:18 -- common/autotest_common.sh@926 -- # '[' -z 3946135 ']' 00:05:14.647 07:24:18 -- common/autotest_common.sh@930 -- # kill -0 3946135 00:05:14.647 07:24:18 -- common/autotest_common.sh@931 -- # uname 00:05:14.647 07:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.647 07:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3946135 00:05:14.647 07:24:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:14.647 07:24:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:14.647 07:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3946135' 00:05:14.647 killing process with pid 3946135 00:05:14.647 07:24:18 -- common/autotest_common.sh@945 -- # kill 3946135 00:05:14.647 07:24:18 -- common/autotest_common.sh@950 -- # wait 3946135 00:05:14.985 [2024-10-07 07:24:18.867227] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:15.263 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:15.263 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:15.263 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:15.263 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:15.263 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:15.263 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:15.263 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:15.263 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:15.263 00:05:15.263 real 0m5.068s 00:05:15.263 user 0m10.554s 00:05:15.263 sys 0m0.328s 00:05:15.263 07:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.263 07:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:15.263 ************************************ 00:05:15.263 END TEST event_scheduler 00:05:15.263 ************************************ 00:05:15.263 07:24:19 -- event/event.sh@51 -- # modprobe -n nbd 00:05:15.263 07:24:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:15.263 07:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.263 07:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.263 07:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:15.263 ************************************ 00:05:15.263 START TEST app_repeat 00:05:15.263 ************************************ 00:05:15.263 07:24:19 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:15.263 07:24:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.263 07:24:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.263 
07:24:19 -- event/event.sh@13 -- # local nbd_list 00:05:15.263 07:24:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.263 07:24:19 -- event/event.sh@14 -- # local bdev_list 00:05:15.263 07:24:19 -- event/event.sh@15 -- # local repeat_times=4 00:05:15.263 07:24:19 -- event/event.sh@17 -- # modprobe nbd 00:05:15.263 07:24:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:15.263 07:24:19 -- event/event.sh@19 -- # repeat_pid=3946924 00:05:15.263 07:24:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.263 07:24:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3946924' 00:05:15.263 Process app_repeat pid: 3946924 00:05:15.263 07:24:19 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.263 07:24:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:15.263 spdk_app_start Round 0 00:05:15.263 07:24:19 -- event/event.sh@25 -- # waitforlisten 3946924 /var/tmp/spdk-nbd.sock 00:05:15.263 07:24:19 -- common/autotest_common.sh@819 -- # '[' -z 3946924 ']' 00:05:15.263 07:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.263 07:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.263 07:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.263 07:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.263 07:24:19 -- common/autotest_common.sh@10 -- # set +x 00:05:15.263 [2024-10-07 07:24:19.183597] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:15.263 [2024-10-07 07:24:19.183644] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946924 ] 00:05:15.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.523 [2024-10-07 07:24:19.239272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.523 [2024-10-07 07:24:19.315643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.523 [2024-10-07 07:24:19.315647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.091 07:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.091 07:24:20 -- common/autotest_common.sh@852 -- # return 0 00:05:16.091 07:24:20 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.350 Malloc0 00:05:16.350 07:24:20 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.608 Malloc1 00:05:16.608 07:24:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.608 07:24:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.867 /dev/nbd0 00:05:16.867 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.867 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.867 07:24:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:16.867 07:24:20 -- common/autotest_common.sh@857 -- # local i 00:05:16.867 07:24:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:16.867 07:24:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:16.867 07:24:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:16.867 07:24:20 -- common/autotest_common.sh@861 -- # break 00:05:16.867 07:24:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:16.867 07:24:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:16.867 07:24:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.867 1+0 records in 00:05:16.867 1+0 records out 00:05:16.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192809 s, 21.2 MB/s 00:05:16.867 07:24:20 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.867 07:24:20 -- common/autotest_common.sh@874 -- # size=4096 00:05:16.867 07:24:20 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.867 07:24:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:16.867 07:24:20 -- common/autotest_common.sh@877 -- # return 0 00:05:16.867 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.868 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.868 07:24:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.127 /dev/nbd1 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.127 07:24:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:17.127 07:24:20 -- common/autotest_common.sh@857 -- # local i 00:05:17.127 07:24:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:17.127 07:24:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:17.127 07:24:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:17.127 07:24:20 -- common/autotest_common.sh@861 -- # break 00:05:17.127 07:24:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:17.127 07:24:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:17.127 07:24:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.127 1+0 records in 00:05:17.127 1+0 records out 00:05:17.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183978 s, 22.3 MB/s 00:05:17.127 07:24:20 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.127 07:24:20 -- common/autotest_common.sh@874 -- # size=4096 00:05:17.127 07:24:20 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.127 07:24:20 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:05:17.127 07:24:20 -- common/autotest_common.sh@877 -- # return 0 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.127 07:24:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.127 { 00:05:17.127 "nbd_device": "/dev/nbd0", 00:05:17.127 "bdev_name": "Malloc0" 00:05:17.127 }, 00:05:17.127 { 00:05:17.127 "nbd_device": "/dev/nbd1", 00:05:17.127 "bdev_name": "Malloc1" 00:05:17.127 } 00:05:17.127 ]' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.127 { 00:05:17.127 "nbd_device": "/dev/nbd0", 00:05:17.127 "bdev_name": "Malloc0" 00:05:17.127 }, 00:05:17.127 { 00:05:17.127 "nbd_device": "/dev/nbd1", 00:05:17.127 "bdev_name": "Malloc1" 00:05:17.127 } 00:05:17.127 ]' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.127 /dev/nbd1' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.127 /dev/nbd1' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.127 
07:24:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.127 07:24:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.387 256+0 records in 00:05:17.387 256+0 records out 00:05:17.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00323043 s, 325 MB/s 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.387 256+0 records in 00:05:17.387 256+0 records out 00:05:17.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013481 s, 77.8 MB/s 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.387 256+0 records in 00:05:17.387 256+0 records out 00:05:17.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145147 s, 72.2 MB/s 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.387 07:24:21 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@41 -- # break 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.387 07:24:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@41 -- # break 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.646 07:24:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@65 -- # true 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.905 07:24:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.906 07:24:21 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.906 07:24:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.165 07:24:21 -- event/event.sh@35 -- # sleep 3 00:05:18.425 [2024-10-07 07:24:22.183133] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:18.425 [2024-10-07 07:24:22.245280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.425 [2024-10-07 07:24:22.245284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.425 [2024-10-07 07:24:22.285924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.425 [2024-10-07 07:24:22.285964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.716 07:24:24 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.716 07:24:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:21.716 spdk_app_start Round 1 00:05:21.716 07:24:24 -- event/event.sh@25 -- # waitforlisten 3946924 /var/tmp/spdk-nbd.sock 00:05:21.716 07:24:24 -- common/autotest_common.sh@819 -- # '[' -z 3946924 ']' 00:05:21.716 07:24:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.716 07:24:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.716 07:24:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:21.716 07:24:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.716 07:24:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.716 07:24:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.716 07:24:25 -- common/autotest_common.sh@852 -- # return 0 00:05:21.716 07:24:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.716 Malloc0 00:05:21.716 07:24:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.716 Malloc1 00:05:21.716 07:24:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.716 07:24:25 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.716 /dev/nbd0 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.975 07:24:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:21.975 07:24:25 -- common/autotest_common.sh@857 -- # local i 00:05:21.975 07:24:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:21.975 07:24:25 -- common/autotest_common.sh@861 -- # break 00:05:21.975 07:24:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.975 1+0 records in 00:05:21.975 1+0 records out 00:05:21.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186114 s, 22.0 MB/s 00:05:21.975 07:24:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.975 07:24:25 -- common/autotest_common.sh@874 -- # size=4096 00:05:21.975 07:24:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.975 07:24:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:21.975 07:24:25 -- common/autotest_common.sh@877 -- # return 0 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:05:21.975 /dev/nbd1 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.975 07:24:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:21.975 07:24:25 -- common/autotest_common.sh@857 -- # local i 00:05:21.975 07:24:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:21.975 07:24:25 -- common/autotest_common.sh@861 -- # break 00:05:21.975 07:24:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:21.975 07:24:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.975 1+0 records in 00:05:21.975 1+0 records out 00:05:21.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227905 s, 18.0 MB/s 00:05:21.975 07:24:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.975 07:24:25 -- common/autotest_common.sh@874 -- # size=4096 00:05:21.975 07:24:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.975 07:24:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:21.975 07:24:25 -- common/autotest_common.sh@877 -- # return 0 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.975 07:24:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.233 { 00:05:22.233 "nbd_device": "/dev/nbd0", 00:05:22.233 "bdev_name": "Malloc0" 00:05:22.233 }, 00:05:22.233 { 00:05:22.233 "nbd_device": "/dev/nbd1", 00:05:22.233 "bdev_name": "Malloc1" 00:05:22.233 } 00:05:22.233 ]' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.233 { 00:05:22.233 "nbd_device": "/dev/nbd0", 00:05:22.233 "bdev_name": "Malloc0" 00:05:22.233 }, 00:05:22.233 { 00:05:22.233 "nbd_device": "/dev/nbd1", 00:05:22.233 "bdev_name": "Malloc1" 00:05:22.233 } 00:05:22.233 ]' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.233 /dev/nbd1' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.233 /dev/nbd1' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.233 256+0 records in 00:05:22.233 256+0 records out 00:05:22.233 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108591 s, 96.6 MB/s 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.233 256+0 records in 00:05:22.233 256+0 records out 00:05:22.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135624 s, 77.3 MB/s 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.233 07:24:26 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.493 256+0 records in 00:05:22.493 256+0 records out 00:05:22.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147013 s, 71.3 MB/s 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@41 -- # break 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.493 07:24:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.752 07:24:26 -- 
bdev/nbd_common.sh@41 -- # break 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.752 07:24:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@65 -- # true 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.012 07:24:26 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.012 07:24:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.271 07:24:27 -- event/event.sh@35 -- # sleep 3 00:05:23.530 [2024-10-07 07:24:27.257586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.530 [2024-10-07 07:24:27.321428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.530 [2024-10-07 07:24:27.321431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.530 [2024-10-07 07:24:27.362427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:23.530 [2024-10-07 07:24:27.362466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.822 07:24:30 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.822 07:24:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.822 spdk_app_start Round 2 00:05:26.822 07:24:30 -- event/event.sh@25 -- # waitforlisten 3946924 /var/tmp/spdk-nbd.sock 00:05:26.822 07:24:30 -- common/autotest_common.sh@819 -- # '[' -z 3946924 ']' 00:05:26.822 07:24:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.822 07:24:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.822 07:24:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.822 07:24:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.822 07:24:30 -- common/autotest_common.sh@10 -- # set +x 00:05:26.822 07:24:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.822 07:24:30 -- common/autotest_common.sh@852 -- # return 0 00:05:26.822 07:24:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.822 Malloc0 00:05:26.822 07:24:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.822 Malloc1 00:05:26.822 07:24:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.822 07:24:30 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@12 -- # local i 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.822 07:24:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.822 /dev/nbd0 00:05:27.082 07:24:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.082 07:24:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.082 07:24:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:27.082 07:24:30 -- common/autotest_common.sh@857 -- # local i 00:05:27.082 07:24:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:27.082 07:24:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:27.082 07:24:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:27.082 07:24:30 -- common/autotest_common.sh@861 -- # break 00:05:27.082 07:24:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:27.082 07:24:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:27.082 07:24:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.082 1+0 records in 00:05:27.082 
1+0 records out 00:05:27.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208866 s, 19.6 MB/s 00:05:27.082 07:24:30 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.082 07:24:30 -- common/autotest_common.sh@874 -- # size=4096 00:05:27.082 07:24:30 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.082 07:24:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:27.082 07:24:30 -- common/autotest_common.sh@877 -- # return 0 00:05:27.082 07:24:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.082 07:24:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.082 07:24:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.082 /dev/nbd1 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.082 07:24:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:27.082 07:24:31 -- common/autotest_common.sh@857 -- # local i 00:05:27.082 07:24:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:27.082 07:24:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:27.082 07:24:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:27.082 07:24:31 -- common/autotest_common.sh@861 -- # break 00:05:27.082 07:24:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:27.082 07:24:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:27.082 07:24:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.082 1+0 records in 00:05:27.082 1+0 records out 00:05:27.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019052 s, 21.5 MB/s 00:05:27.082 07:24:31 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.082 07:24:31 -- common/autotest_common.sh@874 -- # size=4096 00:05:27.082 07:24:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.082 07:24:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:27.082 07:24:31 -- common/autotest_common.sh@877 -- # return 0 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.082 07:24:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.340 07:24:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.340 { 00:05:27.340 "nbd_device": "/dev/nbd0", 00:05:27.340 "bdev_name": "Malloc0" 00:05:27.340 }, 00:05:27.340 { 00:05:27.340 "nbd_device": "/dev/nbd1", 00:05:27.340 "bdev_name": "Malloc1" 00:05:27.340 } 00:05:27.340 ]' 00:05:27.340 07:24:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.340 { 00:05:27.340 "nbd_device": "/dev/nbd0", 00:05:27.340 "bdev_name": "Malloc0" 00:05:27.340 }, 00:05:27.340 { 00:05:27.340 "nbd_device": "/dev/nbd1", 00:05:27.340 "bdev_name": "Malloc1" 00:05:27.340 } 00:05:27.340 ]' 00:05:27.340 07:24:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.340 07:24:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.340 /dev/nbd1' 00:05:27.340 07:24:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.341 /dev/nbd1' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.341 
07:24:31 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.341 256+0 records in 00:05:27.341 256+0 records out 00:05:27.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109547 s, 95.7 MB/s 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.341 256+0 records in 00:05:27.341 256+0 records out 00:05:27.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134795 s, 77.8 MB/s 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.341 256+0 records in 00:05:27.341 256+0 records out 00:05:27.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144005 s, 72.8 MB/s 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:27.341 07:24:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.341 07:24:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.599 07:24:31 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@41 -- # break 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.599 07:24:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@41 -- # break 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.858 07:24:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@65 -- # true 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.118 07:24:31 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.118 07:24:31 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.118 07:24:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.377 07:24:32 -- event/event.sh@35 -- # sleep 3 00:05:28.377 [2024-10-07 07:24:32.334705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.636 [2024-10-07 07:24:32.397571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.636 [2024-10-07 07:24:32.397583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.636 [2024-10-07 07:24:32.438952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.636 [2024-10-07 07:24:32.438992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.171 07:24:35 -- event/event.sh@38 -- # waitforlisten 3946924 /var/tmp/spdk-nbd.sock 00:05:31.171 07:24:35 -- common/autotest_common.sh@819 -- # '[' -z 3946924 ']' 00:05:31.171 07:24:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.171 07:24:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.171 07:24:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:31.171 07:24:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.171 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:05:31.430 07:24:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.430 07:24:35 -- common/autotest_common.sh@852 -- # return 0 00:05:31.430 07:24:35 -- event/event.sh@39 -- # killprocess 3946924 00:05:31.430 07:24:35 -- common/autotest_common.sh@926 -- # '[' -z 3946924 ']' 00:05:31.430 07:24:35 -- common/autotest_common.sh@930 -- # kill -0 3946924 00:05:31.430 07:24:35 -- common/autotest_common.sh@931 -- # uname 00:05:31.430 07:24:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.430 07:24:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3946924 00:05:31.430 07:24:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.430 07:24:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.430 07:24:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3946924' 00:05:31.430 killing process with pid 3946924 00:05:31.430 07:24:35 -- common/autotest_common.sh@945 -- # kill 3946924 00:05:31.430 07:24:35 -- common/autotest_common.sh@950 -- # wait 3946924 00:05:31.689 spdk_app_start is called in Round 0. 00:05:31.689 Shutdown signal received, stop current app iteration 00:05:31.689 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:31.689 spdk_app_start is called in Round 1. 00:05:31.689 Shutdown signal received, stop current app iteration 00:05:31.689 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:31.689 spdk_app_start is called in Round 2. 00:05:31.689 Shutdown signal received, stop current app iteration 00:05:31.689 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:31.689 spdk_app_start is called in Round 3. 
00:05:31.689 Shutdown signal received, stop current app iteration 00:05:31.689 07:24:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.689 07:24:35 -- event/event.sh@42 -- # return 0 00:05:31.689 00:05:31.689 real 0m16.387s 00:05:31.689 user 0m35.482s 00:05:31.689 sys 0m2.446s 00:05:31.689 07:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.689 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:05:31.689 ************************************ 00:05:31.689 END TEST app_repeat 00:05:31.689 ************************************ 00:05:31.689 07:24:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.689 07:24:35 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:31.689 07:24:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.689 07:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.689 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:05:31.689 ************************************ 00:05:31.689 START TEST cpu_locks 00:05:31.689 ************************************ 00:05:31.689 07:24:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:31.949 * Looking for test storage... 
00:05:31.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:31.949 07:24:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.949 07:24:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.949 07:24:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.949 07:24:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.949 07:24:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.949 07:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.949 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:05:31.949 ************************************ 00:05:31.949 START TEST default_locks 00:05:31.949 ************************************ 00:05:31.949 07:24:35 -- common/autotest_common.sh@1104 -- # default_locks 00:05:31.949 07:24:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3950037 00:05:31.949 07:24:35 -- event/cpu_locks.sh@47 -- # waitforlisten 3950037 00:05:31.949 07:24:35 -- common/autotest_common.sh@819 -- # '[' -z 3950037 ']' 00:05:31.949 07:24:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.949 07:24:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.949 07:24:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.949 07:24:35 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.949 07:24:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.949 07:24:35 -- common/autotest_common.sh@10 -- # set +x 00:05:31.949 [2024-10-07 07:24:35.726412] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:31.949 [2024-10-07 07:24:35.726460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950037 ] 00:05:31.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.949 [2024-10-07 07:24:35.778997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.949 [2024-10-07 07:24:35.853096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.949 [2024-10-07 07:24:35.853211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.043 07:24:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.043 07:24:36 -- common/autotest_common.sh@852 -- # return 0 00:05:33.043 07:24:36 -- event/cpu_locks.sh@49 -- # locks_exist 3950037 00:05:33.043 07:24:36 -- event/cpu_locks.sh@22 -- # lslocks -p 3950037 00:05:33.043 07:24:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.043 lslocks: write error 00:05:33.043 07:24:36 -- event/cpu_locks.sh@50 -- # killprocess 3950037 00:05:33.043 07:24:36 -- common/autotest_common.sh@926 -- # '[' -z 3950037 ']' 00:05:33.043 07:24:36 -- common/autotest_common.sh@930 -- # kill -0 3950037 00:05:33.043 07:24:36 -- common/autotest_common.sh@931 -- # uname 00:05:33.043 07:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.043 07:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3950037 00:05:33.303 07:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.303 07:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.303 07:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3950037' 00:05:33.303 killing process with pid 3950037 00:05:33.303 07:24:36 -- common/autotest_common.sh@945 -- # kill 3950037 00:05:33.303 07:24:36 -- common/autotest_common.sh@950 -- # 
wait 3950037 00:05:33.562 07:24:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3950037 00:05:33.562 07:24:37 -- common/autotest_common.sh@640 -- # local es=0 00:05:33.562 07:24:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3950037 00:05:33.562 07:24:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:33.562 07:24:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.562 07:24:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:33.562 07:24:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.562 07:24:37 -- common/autotest_common.sh@643 -- # waitforlisten 3950037 00:05:33.562 07:24:37 -- common/autotest_common.sh@819 -- # '[' -z 3950037 ']' 00:05:33.562 07:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.562 07:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.562 07:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:33.562 07:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.562 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3950037) - No such process 00:05:33.562 ERROR: process (pid: 3950037) is no longer running 00:05:33.562 07:24:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.562 07:24:37 -- common/autotest_common.sh@852 -- # return 1 00:05:33.562 07:24:37 -- common/autotest_common.sh@643 -- # es=1 00:05:33.562 07:24:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.562 07:24:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:33.562 07:24:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.562 07:24:37 -- event/cpu_locks.sh@54 -- # no_locks 00:05:33.562 07:24:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.562 07:24:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.562 07:24:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.562 00:05:33.562 real 0m1.643s 00:05:33.562 user 0m1.752s 00:05:33.562 sys 0m0.499s 00:05:33.562 07:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.562 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.562 ************************************ 00:05:33.562 END TEST default_locks 00:05:33.562 ************************************ 00:05:33.562 07:24:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:33.562 07:24:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.562 07:24:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.562 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.562 ************************************ 00:05:33.562 START TEST default_locks_via_rpc 00:05:33.562 ************************************ 00:05:33.562 07:24:37 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:33.562 07:24:37 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=3950314 00:05:33.562 07:24:37 -- event/cpu_locks.sh@63 -- # waitforlisten 3950314 00:05:33.562 07:24:37 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.562 07:24:37 -- common/autotest_common.sh@819 -- # '[' -z 3950314 ']' 00:05:33.562 07:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.562 07:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.562 07:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.562 07:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.562 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.562 [2024-10-07 07:24:37.410833] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:33.562 [2024-10-07 07:24:37.410884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950314 ] 00:05:33.562 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.563 [2024-10-07 07:24:37.465700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.822 [2024-10-07 07:24:37.540637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.822 [2024-10-07 07:24:37.540757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.392 07:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.392 07:24:38 -- common/autotest_common.sh@852 -- # return 0 00:05:34.392 07:24:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:34.392 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:34.392 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.392 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:34.392 07:24:38 -- event/cpu_locks.sh@67 -- # no_locks 00:05:34.392 07:24:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.392 07:24:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.392 07:24:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.392 07:24:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.392 07:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:34.392 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.392 07:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:34.392 07:24:38 -- event/cpu_locks.sh@71 -- # locks_exist 3950314 00:05:34.392 07:24:38 -- event/cpu_locks.sh@22 -- # lslocks -p 3950314 00:05:34.392 07:24:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.651 07:24:38 -- event/cpu_locks.sh@73 -- # killprocess 3950314 
00:05:34.651 07:24:38 -- common/autotest_common.sh@926 -- # '[' -z 3950314 ']' 00:05:34.651 07:24:38 -- common/autotest_common.sh@930 -- # kill -0 3950314 00:05:34.651 07:24:38 -- common/autotest_common.sh@931 -- # uname 00:05:34.651 07:24:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:34.651 07:24:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3950314 00:05:34.651 07:24:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:34.651 07:24:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:34.651 07:24:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3950314' 00:05:34.651 killing process with pid 3950314 00:05:34.651 07:24:38 -- common/autotest_common.sh@945 -- # kill 3950314 00:05:34.651 07:24:38 -- common/autotest_common.sh@950 -- # wait 3950314 00:05:34.909 00:05:34.909 real 0m1.457s 00:05:34.909 user 0m1.554s 00:05:34.909 sys 0m0.445s 00:05:34.909 07:24:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.909 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.909 ************************************ 00:05:34.909 END TEST default_locks_via_rpc 00:05:34.909 ************************************ 00:05:34.909 07:24:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.909 07:24:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.909 07:24:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.909 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.909 ************************************ 00:05:34.909 START TEST non_locking_app_on_locked_coremask 00:05:34.909 ************************************ 00:05:34.909 07:24:38 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:34.909 07:24:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3950573 00:05:34.910 07:24:38 -- event/cpu_locks.sh@81 -- # waitforlisten 3950573 
/var/tmp/spdk.sock 00:05:34.910 07:24:38 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.910 07:24:38 -- common/autotest_common.sh@819 -- # '[' -z 3950573 ']' 00:05:34.910 07:24:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.910 07:24:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.910 07:24:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.910 07:24:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.910 07:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.169 [2024-10-07 07:24:38.905772] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:35.169 [2024-10-07 07:24:38.905819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950573 ] 00:05:35.169 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.169 [2024-10-07 07:24:38.960005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.169 [2024-10-07 07:24:39.034488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.169 [2024-10-07 07:24:39.034602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.107 07:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.107 07:24:39 -- common/autotest_common.sh@852 -- # return 0 00:05:36.107 07:24:39 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:36.107 07:24:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3950749 
00:05:36.107 07:24:39 -- event/cpu_locks.sh@85 -- # waitforlisten 3950749 /var/tmp/spdk2.sock 00:05:36.107 07:24:39 -- common/autotest_common.sh@819 -- # '[' -z 3950749 ']' 00:05:36.107 07:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.107 07:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.107 07:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.107 07:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.107 07:24:39 -- common/autotest_common.sh@10 -- # set +x 00:05:36.107 [2024-10-07 07:24:39.739745] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:36.107 [2024-10-07 07:24:39.739790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950749 ] 00:05:36.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.107 [2024-10-07 07:24:39.813791] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.107 [2024-10-07 07:24:39.813820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.107 [2024-10-07 07:24:39.958744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.107 [2024-10-07 07:24:39.958855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.674 07:24:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.674 07:24:40 -- common/autotest_common.sh@852 -- # return 0 00:05:36.674 07:24:40 -- event/cpu_locks.sh@87 -- # locks_exist 3950573 00:05:36.674 07:24:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.674 07:24:40 -- event/cpu_locks.sh@22 -- # lslocks -p 3950573 00:05:37.243 lslocks: write error 00:05:37.243 07:24:41 -- event/cpu_locks.sh@89 -- # killprocess 3950573 00:05:37.243 07:24:41 -- common/autotest_common.sh@926 -- # '[' -z 3950573 ']' 00:05:37.243 07:24:41 -- common/autotest_common.sh@930 -- # kill -0 3950573 00:05:37.243 07:24:41 -- common/autotest_common.sh@931 -- # uname 00:05:37.243 07:24:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.243 07:24:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3950573 00:05:37.243 07:24:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.243 07:24:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.243 07:24:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3950573' 00:05:37.243 killing process with pid 3950573 00:05:37.243 07:24:41 -- common/autotest_common.sh@945 -- # kill 3950573 00:05:37.243 07:24:41 -- common/autotest_common.sh@950 -- # wait 3950573 00:05:37.811 07:24:41 -- event/cpu_locks.sh@90 -- # killprocess 3950749 00:05:37.811 07:24:41 -- common/autotest_common.sh@926 -- # '[' -z 3950749 ']' 00:05:37.811 07:24:41 -- common/autotest_common.sh@930 -- # kill -0 3950749 00:05:37.811 07:24:41 -- common/autotest_common.sh@931 -- # uname 00:05:37.811 07:24:41 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.811 07:24:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3950749 00:05:38.069 07:24:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.069 07:24:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.069 07:24:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3950749' 00:05:38.069 killing process with pid 3950749 00:05:38.069 07:24:41 -- common/autotest_common.sh@945 -- # kill 3950749 00:05:38.069 07:24:41 -- common/autotest_common.sh@950 -- # wait 3950749 00:05:38.328 00:05:38.328 real 0m3.256s 00:05:38.328 user 0m3.510s 00:05:38.328 sys 0m0.899s 00:05:38.328 07:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.328 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.328 ************************************ 00:05:38.328 END TEST non_locking_app_on_locked_coremask 00:05:38.328 ************************************ 00:05:38.328 07:24:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.328 07:24:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.328 07:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.328 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.328 ************************************ 00:05:38.328 START TEST locking_app_on_unlocked_coremask 00:05:38.328 ************************************ 00:05:38.328 07:24:42 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:38.328 07:24:42 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3951079 00:05:38.328 07:24:42 -- event/cpu_locks.sh@99 -- # waitforlisten 3951079 /var/tmp/spdk.sock 00:05:38.328 07:24:42 -- common/autotest_common.sh@819 -- # '[' -z 3951079 ']' 00:05:38.328 07:24:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.328 07:24:42 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:05:38.328 07:24:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.328 07:24:42 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.328 07:24:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.328 07:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.328 [2024-10-07 07:24:42.201856] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:38.328 [2024-10-07 07:24:42.201908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951079 ] 00:05:38.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.328 [2024-10-07 07:24:42.255866] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.328 [2024-10-07 07:24:42.255894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.586 [2024-10-07 07:24:42.332324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.586 [2024-10-07 07:24:42.332439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.154 07:24:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.154 07:24:43 -- common/autotest_common.sh@852 -- # return 0 00:05:39.154 07:24:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3951300 00:05:39.154 07:24:43 -- event/cpu_locks.sh@103 -- # waitforlisten 3951300 /var/tmp/spdk2.sock 00:05:39.154 07:24:43 -- common/autotest_common.sh@819 -- # '[' -z 3951300 ']' 00:05:39.154 07:24:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.154 07:24:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.154 07:24:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.154 07:24:43 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.154 07:24:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.154 07:24:43 -- common/autotest_common.sh@10 -- # set +x 00:05:39.154 [2024-10-07 07:24:43.055999] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:39.154 [2024-10-07 07:24:43.056045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951300 ] 00:05:39.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.414 [2024-10-07 07:24:43.125904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.414 [2024-10-07 07:24:43.262026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.414 [2024-10-07 07:24:43.266146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.983 07:24:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.983 07:24:43 -- common/autotest_common.sh@852 -- # return 0 00:05:39.983 07:24:43 -- event/cpu_locks.sh@105 -- # locks_exist 3951300 00:05:39.983 07:24:43 -- event/cpu_locks.sh@22 -- # lslocks -p 3951300 00:05:39.983 07:24:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.917 lslocks: write error 00:05:40.917 07:24:44 -- event/cpu_locks.sh@107 -- # killprocess 3951079 00:05:40.917 07:24:44 -- common/autotest_common.sh@926 -- # '[' -z 3951079 ']' 00:05:40.917 07:24:44 -- common/autotest_common.sh@930 -- # kill -0 3951079 00:05:40.917 07:24:44 -- common/autotest_common.sh@931 -- # uname 00:05:40.917 07:24:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.917 07:24:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3951079 00:05:40.917 07:24:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.917 07:24:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.917 07:24:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3951079' 00:05:40.917 killing process with pid 3951079 00:05:40.917 07:24:44 -- common/autotest_common.sh@945 -- # kill 3951079 00:05:40.917 07:24:44 -- common/autotest_common.sh@950 -- # 
wait 3951079 00:05:41.855 07:24:45 -- event/cpu_locks.sh@108 -- # killprocess 3951300 00:05:41.855 07:24:45 -- common/autotest_common.sh@926 -- # '[' -z 3951300 ']' 00:05:41.855 07:24:45 -- common/autotest_common.sh@930 -- # kill -0 3951300 00:05:41.855 07:24:45 -- common/autotest_common.sh@931 -- # uname 00:05:41.855 07:24:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.855 07:24:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3951300 00:05:41.855 07:24:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.855 07:24:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.855 07:24:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3951300' 00:05:41.855 killing process with pid 3951300 00:05:41.855 07:24:45 -- common/autotest_common.sh@945 -- # kill 3951300 00:05:41.855 07:24:45 -- common/autotest_common.sh@950 -- # wait 3951300 00:05:42.114 00:05:42.114 real 0m3.722s 00:05:42.114 user 0m4.015s 00:05:42.114 sys 0m1.095s 00:05:42.114 07:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.114 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 ************************************ 00:05:42.114 END TEST locking_app_on_unlocked_coremask 00:05:42.114 ************************************ 00:05:42.114 07:24:45 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:42.114 07:24:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.114 07:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.114 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 ************************************ 00:05:42.114 START TEST locking_app_on_locked_coremask 00:05:42.114 ************************************ 00:05:42.114 07:24:45 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:42.114 07:24:45 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3951788 
00:05:42.114 07:24:45 -- event/cpu_locks.sh@116 -- # waitforlisten 3951788 /var/tmp/spdk.sock 00:05:42.114 07:24:45 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.114 07:24:45 -- common/autotest_common.sh@819 -- # '[' -z 3951788 ']' 00:05:42.114 07:24:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.114 07:24:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.114 07:24:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.114 07:24:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.114 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:42.114 [2024-10-07 07:24:45.967079] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:42.114 [2024-10-07 07:24:45.967130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951788 ] 00:05:42.114 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.114 [2024-10-07 07:24:46.023148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.373 [2024-10-07 07:24:46.089642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.373 [2024-10-07 07:24:46.089758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.941 07:24:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.941 07:24:46 -- common/autotest_common.sh@852 -- # return 0 00:05:42.941 07:24:46 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3952016 00:05:42.941 07:24:46 -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:42.941 07:24:46 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3952016 /var/tmp/spdk2.sock 00:05:42.941 07:24:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:42.941 07:24:46 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3952016 /var/tmp/spdk2.sock 00:05:42.941 07:24:46 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:42.941 07:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.941 07:24:46 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:42.941 07:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:42.941 07:24:46 -- common/autotest_common.sh@643 -- # waitforlisten 3952016 /var/tmp/spdk2.sock 00:05:42.941 07:24:46 -- common/autotest_common.sh@819 -- # '[' -z 3952016 ']' 00:05:42.941 07:24:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.941 07:24:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.941 07:24:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.941 07:24:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.941 07:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.941 [2024-10-07 07:24:46.816776] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:42.941 [2024-10-07 07:24:46.816821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952016 ] 00:05:42.941 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.941 [2024-10-07 07:24:46.890704] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3951788 has claimed it. 00:05:42.941 [2024-10-07 07:24:46.890741] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3952016) - No such process 00:05:43.509 ERROR: process (pid: 3952016) is no longer running 00:05:43.509 07:24:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.509 07:24:47 -- common/autotest_common.sh@852 -- # return 1 00:05:43.509 07:24:47 -- common/autotest_common.sh@643 -- # es=1 00:05:43.509 07:24:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:43.509 07:24:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:43.509 07:24:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:43.509 07:24:47 -- event/cpu_locks.sh@122 -- # locks_exist 3951788 00:05:43.509 07:24:47 -- event/cpu_locks.sh@22 -- # lslocks -p 3951788 00:05:43.509 07:24:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.080 lslocks: write error 00:05:44.080 07:24:47 -- event/cpu_locks.sh@124 -- # killprocess 3951788 00:05:44.080 07:24:47 -- common/autotest_common.sh@926 -- # '[' -z 3951788 ']' 00:05:44.080 07:24:47 -- common/autotest_common.sh@930 -- # kill -0 3951788 00:05:44.080 07:24:47 -- common/autotest_common.sh@931 -- # uname 00:05:44.080 07:24:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.080 07:24:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3951788 00:05:44.080 07:24:47 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.080 07:24:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.080 07:24:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3951788' 00:05:44.080 killing process with pid 3951788 00:05:44.080 07:24:47 -- common/autotest_common.sh@945 -- # kill 3951788 00:05:44.080 07:24:47 -- common/autotest_common.sh@950 -- # wait 3951788 00:05:44.339 00:05:44.339 real 0m2.329s 00:05:44.339 user 0m2.584s 00:05:44.339 sys 0m0.626s 00:05:44.339 07:24:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.339 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.339 ************************************ 00:05:44.339 END TEST locking_app_on_locked_coremask 00:05:44.339 ************************************ 00:05:44.339 07:24:48 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.339 07:24:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.339 07:24:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.339 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.339 ************************************ 00:05:44.339 START TEST locking_overlapped_coremask 00:05:44.339 ************************************ 00:05:44.339 07:24:48 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:44.339 07:24:48 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3952277 00:05:44.339 07:24:48 -- event/cpu_locks.sh@133 -- # waitforlisten 3952277 /var/tmp/spdk.sock 00:05:44.339 07:24:48 -- common/autotest_common.sh@819 -- # '[' -z 3952277 ']' 00:05:44.339 07:24:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.339 07:24:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.339 07:24:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.339 07:24:48 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.339 07:24:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.339 07:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.596 [2024-10-07 07:24:48.331798] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:44.596 [2024-10-07 07:24:48.331847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952277 ] 00:05:44.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.596 [2024-10-07 07:24:48.386408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.596 [2024-10-07 07:24:48.463291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.596 [2024-10-07 07:24:48.463431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.596 [2024-10-07 07:24:48.463527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.596 [2024-10-07 07:24:48.463530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.533 07:24:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.533 07:24:49 -- common/autotest_common.sh@852 -- # return 0 00:05:45.533 07:24:49 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3952402 00:05:45.533 07:24:49 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3952402 /var/tmp/spdk2.sock 00:05:45.533 07:24:49 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:45.533 07:24:49 -- common/autotest_common.sh@640 -- # local es=0 00:05:45.533 07:24:49 -- common/autotest_common.sh@642 -- # 
valid_exec_arg waitforlisten 3952402 /var/tmp/spdk2.sock 00:05:45.533 07:24:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:45.533 07:24:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:45.533 07:24:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:45.533 07:24:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:45.533 07:24:49 -- common/autotest_common.sh@643 -- # waitforlisten 3952402 /var/tmp/spdk2.sock 00:05:45.533 07:24:49 -- common/autotest_common.sh@819 -- # '[' -z 3952402 ']' 00:05:45.533 07:24:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.533 07:24:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.533 07:24:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.533 07:24:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.533 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:45.533 [2024-10-07 07:24:49.197388] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:45.533 [2024-10-07 07:24:49.197437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952402 ] 00:05:45.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.533 [2024-10-07 07:24:49.274158] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3952277 has claimed it. 00:05:45.533 [2024-10-07 07:24:49.274197] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:46.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3952402) - No such process 00:05:46.100 ERROR: process (pid: 3952402) is no longer running 00:05:46.100 07:24:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.100 07:24:49 -- common/autotest_common.sh@852 -- # return 1 00:05:46.100 07:24:49 -- common/autotest_common.sh@643 -- # es=1 00:05:46.100 07:24:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:46.100 07:24:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:46.100 07:24:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:46.100 07:24:49 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:46.100 07:24:49 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.100 07:24:49 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.100 07:24:49 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.100 07:24:49 -- event/cpu_locks.sh@141 -- # killprocess 3952277 00:05:46.100 07:24:49 -- common/autotest_common.sh@926 -- # '[' -z 3952277 ']' 00:05:46.100 07:24:49 -- common/autotest_common.sh@930 -- # kill -0 3952277 00:05:46.100 07:24:49 -- common/autotest_common.sh@931 -- # uname 00:05:46.100 07:24:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.100 07:24:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3952277 00:05:46.100 07:24:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.100 07:24:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.100 07:24:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3952277' 00:05:46.100 killing process with pid 3952277 00:05:46.100 
07:24:49 -- common/autotest_common.sh@945 -- # kill 3952277 00:05:46.100 07:24:49 -- common/autotest_common.sh@950 -- # wait 3952277 00:05:46.358 00:05:46.358 real 0m1.944s 00:05:46.358 user 0m5.520s 00:05:46.358 sys 0m0.392s 00:05:46.358 07:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.358 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.358 ************************************ 00:05:46.358 END TEST locking_overlapped_coremask 00:05:46.358 ************************************ 00:05:46.358 07:24:50 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:46.358 07:24:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.358 07:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.358 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.358 ************************************ 00:05:46.358 START TEST locking_overlapped_coremask_via_rpc 00:05:46.358 ************************************ 00:05:46.358 07:24:50 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:46.358 07:24:50 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3952544 00:05:46.358 07:24:50 -- event/cpu_locks.sh@149 -- # waitforlisten 3952544 /var/tmp/spdk.sock 00:05:46.358 07:24:50 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:46.358 07:24:50 -- common/autotest_common.sh@819 -- # '[' -z 3952544 ']' 00:05:46.358 07:24:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.358 07:24:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.358 07:24:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.358 07:24:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.358 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.358 [2024-10-07 07:24:50.318504] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:46.358 [2024-10-07 07:24:50.318552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952544 ] 00:05:46.617 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.617 [2024-10-07 07:24:50.375964] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.617 [2024-10-07 07:24:50.375995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.617 [2024-10-07 07:24:50.452956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.617 [2024-10-07 07:24:50.453112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.617 [2024-10-07 07:24:50.453210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.617 [2024-10-07 07:24:50.453212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.185 07:24:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.185 07:24:51 -- common/autotest_common.sh@852 -- # return 0 00:05:47.185 07:24:51 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3952774 00:05:47.185 07:24:51 -- event/cpu_locks.sh@153 -- # waitforlisten 3952774 /var/tmp/spdk2.sock 00:05:47.185 07:24:51 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:47.185 07:24:51 -- common/autotest_common.sh@819 -- # '[' -z 3952774 ']' 00:05:47.185 07:24:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.185 07:24:51 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:05:47.185 07:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.185 07:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.185 07:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:47.445 [2024-10-07 07:24:51.187527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:47.445 [2024-10-07 07:24:51.187576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952774 ] 00:05:47.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.445 [2024-10-07 07:24:51.261880] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.445 [2024-10-07 07:24:51.261905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.445 [2024-10-07 07:24:51.405127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.445 [2024-10-07 07:24:51.405277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.445 [2024-10-07 07:24:51.409105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.445 [2024-10-07 07:24:51.409107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:48.383 07:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.383 07:24:52 -- common/autotest_common.sh@852 -- # return 0 00:05:48.383 07:24:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.383 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.383 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.383 07:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:48.383 07:24:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.383 07:24:52 -- common/autotest_common.sh@640 -- # local es=0 00:05:48.383 07:24:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.383 07:24:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:48.383 07:24:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.383 07:24:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:48.383 07:24:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.383 07:24:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.383 07:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.383 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.383 [2024-10-07 07:24:52.029129] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3952544 has claimed it. 
00:05:48.383 request: 00:05:48.383 { 00:05:48.383 "method": "framework_enable_cpumask_locks", 00:05:48.383 "req_id": 1 00:05:48.383 } 00:05:48.383 Got JSON-RPC error response 00:05:48.383 response: 00:05:48.383 { 00:05:48.383 "code": -32603, 00:05:48.383 "message": "Failed to claim CPU core: 2" 00:05:48.383 } 00:05:48.383 07:24:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:48.383 07:24:52 -- common/autotest_common.sh@643 -- # es=1 00:05:48.383 07:24:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.383 07:24:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:48.383 07:24:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.383 07:24:52 -- event/cpu_locks.sh@158 -- # waitforlisten 3952544 /var/tmp/spdk.sock 00:05:48.383 07:24:52 -- common/autotest_common.sh@819 -- # '[' -z 3952544 ']' 00:05:48.383 07:24:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.383 07:24:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.383 07:24:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.383 07:24:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.383 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.383 07:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.383 07:24:52 -- common/autotest_common.sh@852 -- # return 0 00:05:48.383 07:24:52 -- event/cpu_locks.sh@159 -- # waitforlisten 3952774 /var/tmp/spdk2.sock 00:05:48.383 07:24:52 -- common/autotest_common.sh@819 -- # '[' -z 3952774 ']' 00:05:48.383 07:24:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.383 07:24:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.383 07:24:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.383 07:24:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.383 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.643 07:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.643 07:24:52 -- common/autotest_common.sh@852 -- # return 0 00:05:48.643 07:24:52 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.643 07:24:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.643 07:24:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.643 07:24:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.643 00:05:48.643 real 0m2.148s 00:05:48.643 user 0m0.907s 00:05:48.643 sys 0m0.171s 00:05:48.643 07:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.643 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.643 
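The `check_remaining_locks` traces above compare the glob of `/var/tmp/spdk_cpu_lock_*` files against the set expected for the claimed core mask. A minimal standalone sketch of that comparison logic (the function body here is reconstructed from the xtrace output; the temp-dir parameter is illustrative, the real helper hardcodes `/var/tmp`):

```shell
# Sketch of the lock-file check seen in cpu_locks.sh: one zero-padded
# spdk_cpu_lock_NNN file is expected per core claimed by the target.
check_remaining_locks() {
    local dir=$1 ncores=$2
    # Glob expansion is sorted, matching the expected ordering below
    local locks=("$dir"/spdk_cpu_lock_*)
    local expected=() i
    for ((i = 0; i < ncores; i++)); do
        expected+=("$dir/spdk_cpu_lock_$(printf '%03d' "$i")")
    done
    [[ "${locks[*]}" == "${expected[*]}" ]]
}

# A 0x7 core mask claims cores 0-2, so three lock files should remain
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}
check_remaining_locks "$tmp" 3 && echo MATCH
rm -rf "$tmp"
```

With the three expected files present this prints `MATCH`; a stale or extra lock file makes the comparison fail, which is what the test asserts.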
************************************ 00:05:48.643 END TEST locking_overlapped_coremask_via_rpc 00:05:48.643 ************************************ 00:05:48.643 07:24:52 -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.643 07:24:52 -- event/cpu_locks.sh@15 -- # [[ -z 3952544 ]] 00:05:48.643 07:24:52 -- event/cpu_locks.sh@15 -- # killprocess 3952544 00:05:48.643 07:24:52 -- common/autotest_common.sh@926 -- # '[' -z 3952544 ']' 00:05:48.643 07:24:52 -- common/autotest_common.sh@930 -- # kill -0 3952544 00:05:48.643 07:24:52 -- common/autotest_common.sh@931 -- # uname 00:05:48.643 07:24:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.643 07:24:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3952544 00:05:48.643 07:24:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.643 07:24:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.643 07:24:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3952544' 00:05:48.643 killing process with pid 3952544 00:05:48.643 07:24:52 -- common/autotest_common.sh@945 -- # kill 3952544 00:05:48.643 07:24:52 -- common/autotest_common.sh@950 -- # wait 3952544 00:05:48.902 07:24:52 -- event/cpu_locks.sh@16 -- # [[ -z 3952774 ]] 00:05:48.902 07:24:52 -- event/cpu_locks.sh@16 -- # killprocess 3952774 00:05:48.902 07:24:52 -- common/autotest_common.sh@926 -- # '[' -z 3952774 ']' 00:05:48.902 07:24:52 -- common/autotest_common.sh@930 -- # kill -0 3952774 00:05:48.902 07:24:52 -- common/autotest_common.sh@931 -- # uname 00:05:48.902 07:24:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.902 07:24:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3952774 00:05:49.162 07:24:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:49.162 07:24:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:49.162 07:24:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
3952774' 00:05:49.162 killing process with pid 3952774 00:05:49.162 07:24:52 -- common/autotest_common.sh@945 -- # kill 3952774 00:05:49.162 07:24:52 -- common/autotest_common.sh@950 -- # wait 3952774 00:05:49.422 07:24:53 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.422 07:24:53 -- event/cpu_locks.sh@1 -- # cleanup 00:05:49.422 07:24:53 -- event/cpu_locks.sh@15 -- # [[ -z 3952544 ]] 00:05:49.422 07:24:53 -- event/cpu_locks.sh@15 -- # killprocess 3952544 00:05:49.422 07:24:53 -- common/autotest_common.sh@926 -- # '[' -z 3952544 ']' 00:05:49.422 07:24:53 -- common/autotest_common.sh@930 -- # kill -0 3952544 00:05:49.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3952544) - No such process 00:05:49.422 07:24:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3952544 is not found' 00:05:49.422 Process with pid 3952544 is not found 00:05:49.422 07:24:53 -- event/cpu_locks.sh@16 -- # [[ -z 3952774 ]] 00:05:49.422 07:24:53 -- event/cpu_locks.sh@16 -- # killprocess 3952774 00:05:49.422 07:24:53 -- common/autotest_common.sh@926 -- # '[' -z 3952774 ']' 00:05:49.422 07:24:53 -- common/autotest_common.sh@930 -- # kill -0 3952774 00:05:49.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3952774) - No such process 00:05:49.422 07:24:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3952774 is not found' 00:05:49.422 Process with pid 3952774 is not found 00:05:49.422 07:24:53 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.422 00:05:49.422 real 0m17.657s 00:05:49.422 user 0m30.667s 00:05:49.422 sys 0m4.954s 00:05:49.422 07:24:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.422 07:24:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.422 ************************************ 00:05:49.422 END TEST cpu_locks 00:05:49.422 ************************************ 00:05:49.422 00:05:49.422 real 0m43.158s 00:05:49.422 user 1m23.310s 
00:05:49.422 sys 0m8.204s 00:05:49.422 07:24:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.422 07:24:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.422 ************************************ 00:05:49.422 END TEST event 00:05:49.422 ************************************ 00:05:49.422 07:24:53 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.422 07:24:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.422 07:24:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.422 07:24:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.422 ************************************ 00:05:49.422 START TEST thread 00:05:49.422 ************************************ 00:05:49.422 07:24:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.682 * Looking for test storage... 00:05:49.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:49.682 07:24:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.682 07:24:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:49.682 07:24:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.682 07:24:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.682 ************************************ 00:05:49.682 START TEST thread_poller_perf 00:05:49.682 ************************************ 00:05:49.682 07:24:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.682 [2024-10-07 07:24:53.435662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:49.682 [2024-10-07 07:24:53.435740] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953321 ] 00:05:49.682 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.682 [2024-10-07 07:24:53.495410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.682 [2024-10-07 07:24:53.565974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.682 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:51.063 ====================================== 00:05:51.063 busy:2108650878 (cyc) 00:05:51.063 total_run_count: 406000 00:05:51.063 tsc_hz: 2100000000 (cyc) 00:05:51.063 ====================================== 00:05:51.063 poller_cost: 5193 (cyc), 2472 (nsec) 00:05:51.063 00:05:51.063 real 0m1.247s 00:05:51.063 user 0m1.165s 00:05:51.063 sys 0m0.078s 00:05:51.063 07:24:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.063 07:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.063 ************************************ 00:05:51.063 END TEST thread_poller_perf 00:05:51.063 ************************************ 00:05:51.063 07:24:54 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.063 07:24:54 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:51.063 07:24:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.063 07:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.063 ************************************ 00:05:51.063 START TEST thread_poller_perf 00:05:51.063 ************************************ 00:05:51.063 07:24:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.063 
[2024-10-07 07:24:54.704568] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:51.063 [2024-10-07 07:24:54.704620] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953543 ] 00:05:51.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.063 [2024-10-07 07:24:54.760759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.063 [2024-10-07 07:24:54.830031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.063 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:52.002 ====================================== 00:05:52.002 busy:2101748372 (cyc) 00:05:52.002 total_run_count: 5661000 00:05:52.002 tsc_hz: 2100000000 (cyc) 00:05:52.002 ====================================== 00:05:52.002 poller_cost: 371 (cyc), 176 (nsec) 00:05:52.002 00:05:52.002 real 0m1.226s 00:05:52.002 user 0m1.164s 00:05:52.002 sys 0m0.059s 00:05:52.002 07:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.002 07:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 ************************************ 00:05:52.002 END TEST thread_poller_perf 00:05:52.002 ************************************ 00:05:52.002 07:24:55 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.002 00:05:52.002 real 0m2.626s 00:05:52.002 user 0m2.393s 00:05:52.002 sys 0m0.243s 00:05:52.002 07:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.002 07:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 ************************************ 00:05:52.002 END TEST thread 00:05:52.002 ************************************ 00:05:52.262 07:24:55 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:52.262 07:24:55 -- common/autotest_common.sh@1077 -- # 
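The poller_perf summaries above can be recomputed directly from the reported busy cycles, run count, and TSC frequency; bash integer arithmetic truncates the same way the reported values do. A sketch using the numbers from the zero-period run (the variable names are illustrative, not from the tool itself):

```shell
# Recompute the poller_perf summary from the log's raw counters:
# poller_cost (cyc) = busy cycles / total_run_count
# poller_cost (nsec) = cycles scaled by the 2.1 GHz TSC
busy_cyc=2101748372      # "busy:" line from the -l 0 run
run_count=5661000        # "total_run_count:"
tsc_hz=2100000000        # "tsc_hz: 2100000000 (cyc)"

poller_cost_cyc=$((busy_cyc / run_count))
poller_cost_nsec=$((poller_cost_cyc * 1000000000 / tsc_hz))

echo "poller_cost: ${poller_cost_cyc} (cyc), ${poller_cost_nsec} (nsec)"
# prints: poller_cost: 371 (cyc), 176 (nsec)
```

Plugging in the first run's counters (2108650878 cycles over 406000 runs) gives 5193 cyc / 2472 nsec the same way, matching its summary block.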
'[' 2 -le 1 ']' 00:05:52.262 07:24:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.262 07:24:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.262 ************************************ 00:05:52.262 START TEST accel 00:05:52.262 ************************************ 00:05:52.262 07:24:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:52.262 * Looking for test storage... 00:05:52.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:52.262 07:24:56 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:52.262 07:24:56 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:52.262 07:24:56 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.262 07:24:56 -- accel/accel.sh@59 -- # spdk_tgt_pid=3953849 00:05:52.262 07:24:56 -- accel/accel.sh@60 -- # waitforlisten 3953849 00:05:52.262 07:24:56 -- common/autotest_common.sh@819 -- # '[' -z 3953849 ']' 00:05:52.262 07:24:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.262 07:24:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.262 07:24:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.262 07:24:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.262 07:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.262 07:24:56 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:52.262 07:24:56 -- accel/accel.sh@58 -- # build_accel_config 00:05:52.262 07:24:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.262 07:24:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.262 07:24:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.262 07:24:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.262 07:24:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.262 07:24:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.262 07:24:56 -- accel/accel.sh@42 -- # jq -r . 00:05:52.262 [2024-10-07 07:24:56.122724] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:52.262 [2024-10-07 07:24:56.122774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953849 ] 00:05:52.262 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.262 [2024-10-07 07:24:56.177296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.521 [2024-10-07 07:24:56.252576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.521 [2024-10-07 07:24:56.252684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.090 07:24:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.090 07:24:56 -- common/autotest_common.sh@852 -- # return 0 00:05:53.090 07:24:56 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:53.090 07:24:56 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:53.090 07:24:56 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:53.090 07:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:53.090 07:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:53.090 07:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 
07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read 
-r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # IFS== 00:05:53.090 07:24:56 -- accel/accel.sh@64 -- # read -r opc module 00:05:53.090 07:24:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:53.090 07:24:56 -- accel/accel.sh@67 -- # killprocess 3953849 00:05:53.090 07:24:56 -- common/autotest_common.sh@926 -- # '[' -z 3953849 ']' 00:05:53.090 07:24:56 -- common/autotest_common.sh@930 -- # kill -0 3953849 00:05:53.090 07:24:56 -- common/autotest_common.sh@931 -- # uname 00:05:53.090 07:24:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.090 07:24:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3953849 00:05:53.091 07:24:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.091 07:24:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.091 07:24:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3953849' 00:05:53.091 killing process with pid 3953849 00:05:53.091 07:24:57 -- common/autotest_common.sh@945 -- # kill 3953849 00:05:53.091 07:24:57 -- common/autotest_common.sh@950 -- # wait 3953849 00:05:53.660 07:24:57 -- accel/accel.sh@68 -- # trap - ERR 00:05:53.660 07:24:57 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:53.660 07:24:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:53.660 07:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.660 07:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.660 07:24:57 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:53.660 07:24:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:53.660 07:24:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.660 07:24:57 -- accel/accel.sh@32 -- # 
accel_json_cfg=() 00:05:53.660 07:24:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.660 07:24:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.660 07:24:57 -- accel/accel.sh@42 -- # jq -r . 00:05:53.660 07:24:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.660 07:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.660 07:24:57 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:53.660 07:24:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:53.660 07:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.660 07:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.660 ************************************ 00:05:53.660 START TEST accel_missing_filename 00:05:53.660 ************************************ 00:05:53.660 07:24:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:53.660 07:24:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.660 07:24:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:53.660 07:24:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:53.660 07:24:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.660 07:24:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:53.660 07:24:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.660 07:24:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:53.660 07:24:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:53.660 07:24:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.660 07:24:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.660 
07:24:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.660 07:24:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.660 07:24:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.660 07:24:57 -- accel/accel.sh@42 -- # jq -r . 00:05:53.660 [2024-10-07 07:24:57.434295] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:53.660 [2024-10-07 07:24:57.434373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954102 ] 00:05:53.660 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.660 [2024-10-07 07:24:57.493251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.660 [2024-10-07 07:24:57.560504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.660 [2024-10-07 07:24:57.600739] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.920 [2024-10-07 07:24:57.660463] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:53.920 A filename is required. 
00:05:53.920 07:24:57 -- common/autotest_common.sh@643 -- # es=234 00:05:53.920 07:24:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:53.920 07:24:57 -- common/autotest_common.sh@652 -- # es=106 00:05:53.920 07:24:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:53.920 07:24:57 -- common/autotest_common.sh@660 -- # es=1 00:05:53.920 07:24:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:53.920 00:05:53.920 real 0m0.347s 00:05:53.920 user 0m0.267s 00:05:53.920 sys 0m0.121s 00:05:53.920 07:24:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.920 07:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.920 ************************************ 00:05:53.920 END TEST accel_missing_filename 00:05:53.920 ************************************ 00:05:53.920 07:24:57 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.920 07:24:57 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:53.920 07:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.920 07:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:53.920 ************************************ 00:05:53.920 START TEST accel_compress_verify 00:05:53.920 ************************************ 00:05:53.920 07:24:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.920 07:24:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:53.920 07:24:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.920 07:24:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:53.920 07:24:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.920 07:24:57 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:05:53.920 07:24:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:53.920 07:24:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.920 07:24:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.920 07:24:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.920 07:24:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.920 07:24:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.920 07:24:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.920 07:24:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.920 07:24:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.920 07:24:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.920 07:24:57 -- accel/accel.sh@42 -- # jq -r . 00:05:53.920 [2024-10-07 07:24:57.819637] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:53.920 [2024-10-07 07:24:57.819709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954134 ] 00:05:53.920 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.920 [2024-10-07 07:24:57.878191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.179 [2024-10-07 07:24:57.942276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.179 [2024-10-07 07:24:57.982321] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:54.179 [2024-10-07 07:24:58.041907] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:54.179 00:05:54.179 Compression does not support the verify option, aborting. 
00:05:54.179 07:24:58 -- common/autotest_common.sh@643 -- # es=161 00:05:54.179 07:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:54.179 07:24:58 -- common/autotest_common.sh@652 -- # es=33 00:05:54.179 07:24:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:54.179 07:24:58 -- common/autotest_common.sh@660 -- # es=1 00:05:54.179 07:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:54.179 00:05:54.179 real 0m0.343s 00:05:54.179 user 0m0.261s 00:05:54.179 sys 0m0.122s 00:05:54.179 07:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.179 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.179 ************************************ 00:05:54.179 END TEST accel_compress_verify 00:05:54.179 ************************************ 00:05:54.439 07:24:58 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:54.439 07:24:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:54.439 07:24:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.439 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.439 ************************************ 00:05:54.439 START TEST accel_wrong_workload 00:05:54.439 ************************************ 00:05:54.439 07:24:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:54.439 07:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:05:54.439 07:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:54.439 07:24:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:54.439 07:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:54.439 07:24:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:54.439 07:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:54.439 07:24:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:54.439 07:24:58 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:54.439 07:24:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.439 07:24:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.439 07:24:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.439 07:24:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.439 07:24:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.439 07:24:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.439 07:24:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.439 07:24:58 -- accel/accel.sh@42 -- # jq -r . 00:05:54.439 Unsupported workload type: foobar 00:05:54.440 [2024-10-07 07:24:58.191469] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:54.440 accel_perf options: 00:05:54.440 [-h help message] 00:05:54.440 [-q queue depth per core] 00:05:54.440 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:54.440 [-T number of threads per core 00:05:54.440 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:54.440 [-t time in seconds] 00:05:54.440 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:54.440 [ dif_verify, , dif_generate, dif_generate_copy 00:05:54.440 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:54.440 [-l for compress/decompress workloads, name of uncompressed input file 00:05:54.440 [-S for crc32c workload, use this seed value (default 0) 00:05:54.440 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:54.440 [-f for fill workload, use this BYTE value (default 255) 00:05:54.440 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:54.440 [-y verify result if this switch is on] 00:05:54.440 [-a tasks to allocate per core (default: same value as -q)] 00:05:54.440 Can be used to spread operations across a wider range of memory. 00:05:54.440 07:24:58 -- common/autotest_common.sh@643 -- # es=1 00:05:54.440 07:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:54.440 07:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:54.440 07:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:54.440 00:05:54.440 real 0m0.031s 00:05:54.440 user 0m0.019s 00:05:54.440 sys 0m0.012s 00:05:54.440 07:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.440 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.440 ************************************ 00:05:54.440 END TEST accel_wrong_workload 00:05:54.440 ************************************ 00:05:54.440 Error: writing output failed: Broken pipe 00:05:54.440 07:24:58 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:54.440 07:24:58 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:54.440 07:24:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:05:54.440 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.440 ************************************ 00:05:54.440 START TEST accel_negative_buffers 00:05:54.440 ************************************ 00:05:54.440 07:24:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:54.440 07:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:05:54.440 07:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:54.440 07:24:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:54.440 07:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:54.440 07:24:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:54.440 07:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:54.440 07:24:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:54.440 07:24:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:54.440 07:24:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.440 07:24:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.440 07:24:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.440 07:24:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.440 07:24:58 -- accel/accel.sh@42 -- # jq -r . 00:05:54.440 -x option must be non-negative. 
00:05:54.440 [2024-10-07 07:24:58.262710] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:54.440 accel_perf options: 00:05:54.440 [-h help message] 00:05:54.440 [-q queue depth per core] 00:05:54.440 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:54.440 [-T number of threads per core 00:05:54.440 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:54.440 [-t time in seconds] 00:05:54.440 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:54.440 [ dif_verify, , dif_generate, dif_generate_copy 00:05:54.440 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:54.440 [-l for compress/decompress workloads, name of uncompressed input file 00:05:54.440 [-S for crc32c workload, use this seed value (default 0) 00:05:54.440 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:54.440 [-f for fill workload, use this BYTE value (default 255) 00:05:54.440 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:54.440 [-y verify result if this switch is on] 00:05:54.440 [-a tasks to allocate per core (default: same value as -q)] 00:05:54.440 Can be used to spread operations across a wider range of memory. 
00:05:54.440 07:24:58 -- common/autotest_common.sh@643 -- # es=1 00:05:54.440 07:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:54.440 07:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:54.440 07:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:54.440 00:05:54.440 real 0m0.034s 00:05:54.440 user 0m0.023s 00:05:54.440 sys 0m0.011s 00:05:54.440 07:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.440 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.440 ************************************ 00:05:54.440 END TEST accel_negative_buffers 00:05:54.440 ************************************ 00:05:54.440 Error: writing output failed: Broken pipe 00:05:54.440 07:24:58 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:54.440 07:24:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:54.440 07:24:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.440 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.440 ************************************ 00:05:54.440 START TEST accel_crc32c 00:05:54.440 ************************************ 00:05:54.440 07:24:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:54.440 07:24:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.440 07:24:58 -- accel/accel.sh@17 -- # local accel_module 00:05:54.440 07:24:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:54.440 07:24:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:54.440 07:24:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.440 07:24:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.440 07:24:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.440 07:24:58 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.440 07:24:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.440 07:24:58 -- accel/accel.sh@42 -- # jq -r . 00:05:54.440 [2024-10-07 07:24:58.334252] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:54.440 [2024-10-07 07:24:58.334320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954194 ] 00:05:54.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.440 [2024-10-07 07:24:58.393062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.700 [2024-10-07 07:24:58.467804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.079 07:24:59 -- accel/accel.sh@18 -- # out=' 00:05:56.079 SPDK Configuration: 00:05:56.079 Core mask: 0x1 00:05:56.079 00:05:56.079 Accel Perf Configuration: 00:05:56.079 Workload Type: crc32c 00:05:56.079 CRC-32C seed: 32 00:05:56.079 Transfer size: 4096 bytes 00:05:56.079 Vector count 1 00:05:56.079 Module: software 00:05:56.079 Queue depth: 32 00:05:56.079 Allocate depth: 32 00:05:56.079 # threads/core: 1 00:05:56.079 Run time: 1 seconds 00:05:56.079 Verify: Yes 00:05:56.079 00:05:56.079 Running for 1 seconds... 
00:05:56.079 00:05:56.079 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.079 ------------------------------------------------------------------------------------ 00:05:56.079 0,0 584448/s 2283 MiB/s 0 0 00:05:56.079 ==================================================================================== 00:05:56.079 Total 584448/s 2283 MiB/s 0 0' 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:56.079 07:24:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:56.079 07:24:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.079 07:24:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.079 07:24:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.079 07:24:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.079 07:24:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.079 07:24:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.079 07:24:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.079 07:24:59 -- accel/accel.sh@42 -- # jq -r . 00:05:56.079 [2024-10-07 07:24:59.689800] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:56.079 [2024-10-07 07:24:59.689878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954426 ] 00:05:56.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.079 [2024-10-07 07:24:59.746704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.079 [2024-10-07 07:24:59.812974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=0x1 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=crc32c 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- 
accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=32 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=software 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=32 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=32 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=1 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 
-- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val=Yes 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:56.079 07:24:59 -- accel/accel.sh@21 -- # val= 00:05:56.079 07:24:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # IFS=: 00:05:56.079 07:24:59 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@21 -- # val= 00:05:57.460 07:25:01 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # IFS=: 00:05:57.460 07:25:01 -- accel/accel.sh@20 -- # read -r var val 00:05:57.460 07:25:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:57.460 07:25:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:57.460 07:25:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.460 00:05:57.460 real 0m2.701s 00:05:57.460 user 0m2.470s 00:05:57.460 sys 0m0.227s 00:05:57.460 07:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.460 07:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 END TEST accel_crc32c 00:05:57.460 ************************************ 00:05:57.460 07:25:01 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:57.460 07:25:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:57.460 07:25:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.460 07:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 START TEST accel_crc32c_C2 00:05:57.460 ************************************ 00:05:57.460 07:25:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:57.460 07:25:01 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.460 07:25:01 -- accel/accel.sh@17 -- # local accel_module 00:05:57.460 07:25:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:57.460 07:25:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:57.460 07:25:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.460 07:25:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.460 07:25:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.460 07:25:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.460 07:25:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.460 07:25:01 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.460 07:25:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.460 07:25:01 -- accel/accel.sh@42 -- # jq -r . 00:05:57.460 [2024-10-07 07:25:01.066588] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:57.460 [2024-10-07 07:25:01.066663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954670 ] 00:05:57.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.460 [2024-10-07 07:25:01.122739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.460 [2024-10-07 07:25:01.190662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.840 07:25:02 -- accel/accel.sh@18 -- # out=' 00:05:58.840 SPDK Configuration: 00:05:58.840 Core mask: 0x1 00:05:58.840 00:05:58.840 Accel Perf Configuration: 00:05:58.840 Workload Type: crc32c 00:05:58.840 CRC-32C seed: 0 00:05:58.840 Transfer size: 4096 bytes 00:05:58.840 Vector count 2 00:05:58.840 Module: software 00:05:58.840 Queue depth: 32 00:05:58.840 Allocate depth: 32 00:05:58.840 # threads/core: 1 00:05:58.840 Run time: 1 seconds 00:05:58.840 Verify: Yes 00:05:58.840 00:05:58.840 Running for 1 seconds... 
00:05:58.840 00:05:58.840 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.840 ------------------------------------------------------------------------------------ 00:05:58.840 0,0 466016/s 3640 MiB/s 0 0 00:05:58.840 ==================================================================================== 00:05:58.840 Total 466016/s 1820 MiB/s 0 0' 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:58.840 07:25:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:58.840 07:25:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.840 07:25:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.840 07:25:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.840 07:25:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.840 07:25:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.840 07:25:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.840 07:25:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.840 07:25:02 -- accel/accel.sh@42 -- # jq -r . 00:05:58.840 [2024-10-07 07:25:02.412710] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:58.840 [2024-10-07 07:25:02.412788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954898 ] 00:05:58.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.840 [2024-10-07 07:25:02.469372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.840 [2024-10-07 07:25:02.535719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=0x1 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=crc32c 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- 
accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=0 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=software 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=32 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=32 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=1 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- 
# read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val=Yes 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.840 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.840 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.840 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.841 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.841 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:58.841 07:25:02 -- accel/accel.sh@21 -- # val= 00:05:58.841 07:25:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.841 07:25:02 -- accel/accel.sh@20 -- # IFS=: 00:05:58.841 07:25:02 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@21 -- # val= 00:05:59.786 07:25:03 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # IFS=: 00:05:59.786 07:25:03 -- accel/accel.sh@20 -- # read -r var val 00:05:59.786 07:25:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.786 07:25:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:59.786 07:25:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.786 00:05:59.786 real 0m2.691s 00:05:59.786 user 0m2.468s 00:05:59.786 sys 0m0.220s 00:05:59.786 07:25:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.786 07:25:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.786 ************************************ 00:05:59.786 END TEST accel_crc32c_C2 00:05:59.786 ************************************ 00:06:00.046 07:25:03 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:00.046 07:25:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:00.046 07:25:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.046 07:25:03 -- common/autotest_common.sh@10 -- # set +x 00:06:00.046 ************************************ 00:06:00.046 START TEST accel_copy 00:06:00.046 ************************************ 00:06:00.046 07:25:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:00.046 07:25:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.046 07:25:03 -- accel/accel.sh@17 -- # local accel_module 00:06:00.046 07:25:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:00.046 07:25:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:00.046 07:25:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.046 07:25:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.046 07:25:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.046 07:25:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.046 07:25:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.046 07:25:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:00.046 07:25:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.046 07:25:03 -- accel/accel.sh@42 -- # jq -r . 00:06:00.046 [2024-10-07 07:25:03.790801] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:00.046 [2024-10-07 07:25:03.790857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955148 ] 00:06:00.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.046 [2024-10-07 07:25:03.845815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.046 [2024-10-07 07:25:03.915258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.425 07:25:05 -- accel/accel.sh@18 -- # out=' 00:06:01.425 SPDK Configuration: 00:06:01.425 Core mask: 0x1 00:06:01.425 00:06:01.425 Accel Perf Configuration: 00:06:01.425 Workload Type: copy 00:06:01.425 Transfer size: 4096 bytes 00:06:01.425 Vector count 1 00:06:01.425 Module: software 00:06:01.425 Queue depth: 32 00:06:01.425 Allocate depth: 32 00:06:01.425 # threads/core: 1 00:06:01.425 Run time: 1 seconds 00:06:01.425 Verify: Yes 00:06:01.425 00:06:01.425 Running for 1 seconds... 
00:06:01.425 00:06:01.425 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.425 ------------------------------------------------------------------------------------ 00:06:01.425 0,0 437664/s 1709 MiB/s 0 0 00:06:01.425 ==================================================================================== 00:06:01.425 Total 437664/s 1709 MiB/s 0 0' 00:06:01.425 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.425 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.425 07:25:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:01.425 07:25:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:01.425 07:25:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.425 07:25:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.425 07:25:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.425 07:25:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.425 07:25:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.425 07:25:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.425 07:25:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.425 07:25:05 -- accel/accel.sh@42 -- # jq -r . 00:06:01.425 [2024-10-07 07:25:05.125552] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:01.426 [2024-10-07 07:25:05.125600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955374 ] 00:06:01.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.426 [2024-10-07 07:25:05.179314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.426 [2024-10-07 07:25:05.245204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=0x1 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=copy 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- 
accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=software 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=32 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=32 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=1 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val=Yes 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 
-- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:01.426 07:25:05 -- accel/accel.sh@21 -- # val= 00:06:01.426 07:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # IFS=: 00:06:01.426 07:25:05 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@21 -- # val= 00:06:02.805 07:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # IFS=: 00:06:02.805 07:25:06 -- accel/accel.sh@20 -- # read -r var val 00:06:02.805 07:25:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.805 07:25:06 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:06:02.805 07:25:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.805 00:06:02.805 real 0m2.675s 00:06:02.805 user 0m2.463s 00:06:02.805 sys 0m0.209s 00:06:02.805 07:25:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.805 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 ************************************ 00:06:02.805 END TEST accel_copy 00:06:02.805 ************************************ 00:06:02.805 07:25:06 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.805 07:25:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:02.805 07:25:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.805 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 ************************************ 00:06:02.805 START TEST accel_fill 00:06:02.805 ************************************ 00:06:02.805 07:25:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.805 07:25:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.805 07:25:06 -- accel/accel.sh@17 -- # local accel_module 00:06:02.805 07:25:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.805 07:25:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:02.805 07:25:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.805 07:25:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.805 07:25:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.805 07:25:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.805 07:25:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.805 07:25:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.805 07:25:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.805 07:25:06 -- accel/accel.sh@42 -- # jq -r . 
00:06:02.805 [2024-10-07 07:25:06.493645] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:02.805 [2024-10-07 07:25:06.493704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955615 ] 00:06:02.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.805 [2024-10-07 07:25:06.548171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.805 [2024-10-07 07:25:06.617468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.183 07:25:07 -- accel/accel.sh@18 -- # out=' 00:06:04.183 SPDK Configuration: 00:06:04.183 Core mask: 0x1 00:06:04.183 00:06:04.183 Accel Perf Configuration: 00:06:04.183 Workload Type: fill 00:06:04.183 Fill pattern: 0x80 00:06:04.183 Transfer size: 4096 bytes 00:06:04.183 Vector count 1 00:06:04.183 Module: software 00:06:04.183 Queue depth: 64 00:06:04.183 Allocate depth: 64 00:06:04.183 # threads/core: 1 00:06:04.183 Run time: 1 seconds 00:06:04.183 Verify: Yes 00:06:04.183 00:06:04.183 Running for 1 seconds... 
00:06:04.183 00:06:04.183 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.183 ------------------------------------------------------------------------------------ 00:06:04.183 0,0 670336/s 2618 MiB/s 0 0 00:06:04.183 ==================================================================================== 00:06:04.183 Total 670336/s 2618 MiB/s 0 0' 00:06:04.183 07:25:07 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:07 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:04.183 07:25:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:04.183 07:25:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.183 07:25:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.183 07:25:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.183 07:25:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.183 07:25:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.183 07:25:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.183 07:25:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.183 07:25:07 -- accel/accel.sh@42 -- # jq -r . 00:06:04.183 [2024-10-07 07:25:07.837588] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:04.183 [2024-10-07 07:25:07.837647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955852 ] 00:06:04.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.183 [2024-10-07 07:25:07.892603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.183 [2024-10-07 07:25:07.960928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=0x1 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=fill 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- 
accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=0x80 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=software 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=64 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=64 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.183 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.183 07:25:08 -- accel/accel.sh@21 -- # val=1 00:06:04.183 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.184 07:25:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.184 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.184 07:25:08 -- accel/accel.sh@20 
-- # read -r var val 00:06:04.184 07:25:08 -- accel/accel.sh@21 -- # val=Yes 00:06:04.184 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.184 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.184 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:04.184 07:25:08 -- accel/accel.sh@21 -- # val= 00:06:04.184 07:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # IFS=: 00:06:04.184 07:25:08 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@21 -- # val= 00:06:05.568 07:25:09 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # IFS=: 00:06:05.568 07:25:09 -- accel/accel.sh@20 -- # read -r var val 00:06:05.568 07:25:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.568 07:25:09 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:05.568 07:25:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.568 00:06:05.568 real 0m2.687s 00:06:05.568 user 0m2.467s 00:06:05.568 sys 0m0.218s 00:06:05.568 07:25:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.568 07:25:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.568 ************************************ 00:06:05.569 END TEST accel_fill 00:06:05.569 ************************************ 00:06:05.569 07:25:09 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:05.569 07:25:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:05.569 07:25:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.569 07:25:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.569 ************************************ 00:06:05.569 START TEST accel_copy_crc32c 00:06:05.569 ************************************ 00:06:05.569 07:25:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:05.569 07:25:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.569 07:25:09 -- accel/accel.sh@17 -- # local accel_module 00:06:05.569 07:25:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:05.569 07:25:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:05.569 07:25:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.569 07:25:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.569 07:25:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.569 07:25:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.569 07:25:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.569 07:25:09 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.569 07:25:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.569 07:25:09 -- accel/accel.sh@42 -- # jq -r . 00:06:05.569 [2024-10-07 07:25:09.212795] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:05.569 [2024-10-07 07:25:09.212854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956093 ] 00:06:05.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.569 [2024-10-07 07:25:09.267567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.569 [2024-10-07 07:25:09.339910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.949 07:25:10 -- accel/accel.sh@18 -- # out=' 00:06:06.949 SPDK Configuration: 00:06:06.949 Core mask: 0x1 00:06:06.949 00:06:06.949 Accel Perf Configuration: 00:06:06.949 Workload Type: copy_crc32c 00:06:06.949 CRC-32C seed: 0 00:06:06.949 Vector size: 4096 bytes 00:06:06.949 Transfer size: 4096 bytes 00:06:06.949 Vector count 1 00:06:06.949 Module: software 00:06:06.949 Queue depth: 32 00:06:06.949 Allocate depth: 32 00:06:06.949 # threads/core: 1 00:06:06.949 Run time: 1 seconds 00:06:06.949 Verify: Yes 00:06:06.949 00:06:06.949 Running for 1 seconds... 
00:06:06.949 00:06:06.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.949 ------------------------------------------------------------------------------------ 00:06:06.949 0,0 322240/s 1258 MiB/s 0 0 00:06:06.949 ==================================================================================== 00:06:06.949 Total 322240/s 1258 MiB/s 0 0' 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:06.949 07:25:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:06.949 07:25:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.949 07:25:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.949 07:25:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.949 07:25:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.949 07:25:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.949 07:25:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.949 07:25:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.949 07:25:10 -- accel/accel.sh@42 -- # jq -r . 00:06:06.949 [2024-10-07 07:25:10.551905] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:06.949 [2024-10-07 07:25:10.551964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956325 ] 00:06:06.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.949 [2024-10-07 07:25:10.605687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.949 [2024-10-07 07:25:10.671300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=0x1 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- 
accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=0 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=software 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=32 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=32 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=1 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 
-- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val=Yes 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:06.949 07:25:10 -- accel/accel.sh@21 -- # val= 00:06:06.949 07:25:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # IFS=: 00:06:06.949 07:25:10 -- accel/accel.sh@20 -- # read -r var val 00:06:08.329 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.329 07:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.329 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.329 07:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.329 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.329 07:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.329 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.330 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.330 07:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.330 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.330 07:25:11 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.330 07:25:11 -- accel/accel.sh@21 -- # val= 00:06:08.330 07:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # IFS=: 00:06:08.330 07:25:11 -- accel/accel.sh@20 -- # read -r var val 00:06:08.330 07:25:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.330 07:25:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:08.330 07:25:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.330 00:06:08.330 real 0m2.677s 00:06:08.330 user 0m2.464s 00:06:08.330 sys 0m0.209s 00:06:08.330 07:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.330 07:25:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.330 ************************************ 00:06:08.330 END TEST accel_copy_crc32c 00:06:08.330 ************************************ 00:06:08.330 07:25:11 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.330 07:25:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:08.330 07:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.330 07:25:11 -- common/autotest_common.sh@10 -- # set +x 00:06:08.330 ************************************ 00:06:08.330 START TEST accel_copy_crc32c_C2 00:06:08.330 ************************************ 00:06:08.330 07:25:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.330 07:25:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.330 07:25:11 -- accel/accel.sh@17 -- # local accel_module 00:06:08.330 07:25:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:08.330 07:25:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:08.330 07:25:11 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:08.330 07:25:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.330 07:25:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.330 07:25:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.330 07:25:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.330 07:25:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.330 07:25:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.330 07:25:11 -- accel/accel.sh@42 -- # jq -r . 00:06:08.330 [2024-10-07 07:25:11.922053] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:08.330 [2024-10-07 07:25:11.922113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956572 ] 00:06:08.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.330 [2024-10-07 07:25:11.977339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.330 [2024-10-07 07:25:12.044572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.711 07:25:13 -- accel/accel.sh@18 -- # out=' 00:06:09.711 SPDK Configuration: 00:06:09.711 Core mask: 0x1 00:06:09.711 00:06:09.711 Accel Perf Configuration: 00:06:09.711 Workload Type: copy_crc32c 00:06:09.711 CRC-32C seed: 0 00:06:09.711 Vector size: 4096 bytes 00:06:09.711 Transfer size: 8192 bytes 00:06:09.711 Vector count 2 00:06:09.711 Module: software 00:06:09.711 Queue depth: 32 00:06:09.711 Allocate depth: 32 00:06:09.711 # threads/core: 1 00:06:09.711 Run time: 1 seconds 00:06:09.711 Verify: Yes 00:06:09.711 00:06:09.711 Running for 1 seconds... 
00:06:09.711 00:06:09.711 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.711 ------------------------------------------------------------------------------------ 00:06:09.711 0,0 244352/s 1909 MiB/s 0 0 00:06:09.711 ==================================================================================== 00:06:09.711 Total 244352/s 954 MiB/s 0 0' 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:09.711 07:25:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:09.711 07:25:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.711 07:25:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.711 07:25:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.711 07:25:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.711 07:25:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.711 07:25:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.711 07:25:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.711 07:25:13 -- accel/accel.sh@42 -- # jq -r . 00:06:09.711 [2024-10-07 07:25:13.255914] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:09.711 [2024-10-07 07:25:13.255962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956804 ] 00:06:09.711 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.711 [2024-10-07 07:25:13.310207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.711 [2024-10-07 07:25:13.376069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=0x1 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- 
accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=0 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=software 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=32 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=32 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=1 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 
-- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val=Yes 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:09.711 07:25:13 -- accel/accel.sh@21 -- # val= 00:06:09.711 07:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # IFS=: 00:06:09.711 07:25:13 -- accel/accel.sh@20 -- # read -r var val 00:06:10.650 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.650 07:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.650 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.650 07:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.650 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.650 07:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.650 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.650 07:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.650 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.651 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.651 07:25:14 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:10.651 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.651 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.651 07:25:14 -- accel/accel.sh@21 -- # val= 00:06:10.651 07:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.651 07:25:14 -- accel/accel.sh@20 -- # IFS=: 00:06:10.651 07:25:14 -- accel/accel.sh@20 -- # read -r var val 00:06:10.651 07:25:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.651 07:25:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:10.651 07:25:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.651 00:06:10.651 real 0m2.674s 00:06:10.651 user 0m2.459s 00:06:10.651 sys 0m0.212s 00:06:10.651 07:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.651 07:25:14 -- common/autotest_common.sh@10 -- # set +x 00:06:10.651 ************************************ 00:06:10.651 END TEST accel_copy_crc32c_C2 00:06:10.651 ************************************ 00:06:10.651 07:25:14 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:10.651 07:25:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:10.651 07:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.651 07:25:14 -- common/autotest_common.sh@10 -- # set +x 00:06:10.651 ************************************ 00:06:10.651 START TEST accel_dualcast 00:06:10.651 ************************************ 00:06:10.651 07:25:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:10.651 07:25:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.651 07:25:14 -- accel/accel.sh@17 -- # local accel_module 00:06:10.651 07:25:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:10.651 07:25:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:10.651 07:25:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.651 07:25:14 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.651 07:25:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.651 07:25:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.651 07:25:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.651 07:25:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.651 07:25:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.651 07:25:14 -- accel/accel.sh@42 -- # jq -r . 00:06:10.910 [2024-10-07 07:25:14.629228] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:10.910 [2024-10-07 07:25:14.629305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957045 ] 00:06:10.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.910 [2024-10-07 07:25:14.685073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.910 [2024-10-07 07:25:14.755075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.291 07:25:15 -- accel/accel.sh@18 -- # out=' 00:06:12.291 SPDK Configuration: 00:06:12.291 Core mask: 0x1 00:06:12.291 00:06:12.291 Accel Perf Configuration: 00:06:12.291 Workload Type: dualcast 00:06:12.291 Transfer size: 4096 bytes 00:06:12.291 Vector count 1 00:06:12.291 Module: software 00:06:12.291 Queue depth: 32 00:06:12.291 Allocate depth: 32 00:06:12.291 # threads/core: 1 00:06:12.291 Run time: 1 seconds 00:06:12.291 Verify: Yes 00:06:12.291 00:06:12.291 Running for 1 seconds... 
00:06:12.291 00:06:12.291 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.291 ------------------------------------------------------------------------------------ 00:06:12.291 0,0 521376/s 2036 MiB/s 0 0 00:06:12.291 ==================================================================================== 00:06:12.291 Total 521376/s 2036 MiB/s 0 0' 00:06:12.291 07:25:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:12.291 07:25:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:12.291 07:25:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.291 07:25:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.291 07:25:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.291 07:25:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.291 07:25:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.291 07:25:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.291 07:25:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.291 07:25:15 -- accel/accel.sh@42 -- # jq -r . 00:06:12.291 [2024-10-07 07:25:15.966554] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:12.291 [2024-10-07 07:25:15.966605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957279 ] 00:06:12.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.291 [2024-10-07 07:25:16.019766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.291 [2024-10-07 07:25:16.085711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=0x1 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=dualcast 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- 
accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=software 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=32 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=32 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=1 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val=Yes 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 
-- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:12.291 07:25:16 -- accel/accel.sh@21 -- # val= 00:06:12.291 07:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # IFS=: 00:06:12.291 07:25:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.670 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.670 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.670 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.670 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.670 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.670 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.670 07:25:17 -- accel/accel.sh@21 -- # val= 00:06:13.671 07:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.671 07:25:17 -- accel/accel.sh@20 -- # IFS=: 00:06:13.671 07:25:17 -- accel/accel.sh@20 -- # read -r var val 00:06:13.671 07:25:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.671 07:25:17 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:13.671 07:25:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.671 00:06:13.671 real 0m2.678s 00:06:13.671 user 0m2.459s 00:06:13.671 sys 0m0.216s 00:06:13.671 07:25:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.671 07:25:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.671 ************************************ 00:06:13.671 END TEST accel_dualcast 00:06:13.671 ************************************ 00:06:13.671 07:25:17 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:13.671 07:25:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:13.671 07:25:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.671 07:25:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.671 ************************************ 00:06:13.671 START TEST accel_compare 00:06:13.671 ************************************ 00:06:13.671 07:25:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:13.671 07:25:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.671 07:25:17 -- accel/accel.sh@17 -- # local accel_module 00:06:13.671 07:25:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:13.671 07:25:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:13.671 07:25:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.671 07:25:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.671 07:25:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.671 07:25:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.671 07:25:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.671 07:25:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.671 07:25:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.671 07:25:17 -- accel/accel.sh@42 -- # jq -r . 
00:06:13.671 [2024-10-07 07:25:17.338354] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:13.671 [2024-10-07 07:25:17.338411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957520 ] 00:06:13.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.671 [2024-10-07 07:25:17.393518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.671 [2024-10-07 07:25:17.461652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.049 07:25:18 -- accel/accel.sh@18 -- # out=' 00:06:15.049 SPDK Configuration: 00:06:15.049 Core mask: 0x1 00:06:15.049 00:06:15.049 Accel Perf Configuration: 00:06:15.049 Workload Type: compare 00:06:15.049 Transfer size: 4096 bytes 00:06:15.049 Vector count 1 00:06:15.049 Module: software 00:06:15.049 Queue depth: 32 00:06:15.049 Allocate depth: 32 00:06:15.049 # threads/core: 1 00:06:15.049 Run time: 1 seconds 00:06:15.049 Verify: Yes 00:06:15.049 00:06:15.049 Running for 1 seconds... 
00:06:15.049 00:06:15.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.049 ------------------------------------------------------------------------------------ 00:06:15.049 0,0 630208/s 2461 MiB/s 0 0 00:06:15.049 ==================================================================================== 00:06:15.049 Total 630208/s 2461 MiB/s 0 0' 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:15.049 07:25:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:15.049 07:25:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.049 07:25:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.049 07:25:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.049 07:25:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.049 07:25:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.049 07:25:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.049 07:25:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.049 07:25:18 -- accel/accel.sh@42 -- # jq -r . 00:06:15.049 [2024-10-07 07:25:18.672694] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:15.049 [2024-10-07 07:25:18.672743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957751 ] 00:06:15.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.049 [2024-10-07 07:25:18.726876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.049 [2024-10-07 07:25:18.793013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val=0x1 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.049 07:25:18 -- accel/accel.sh@21 -- # val=compare 00:06:15.049 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.049 07:25:18 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:15.049 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val=software 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val=32 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val=32 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val=1 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val=Yes 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 
-- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:15.050 07:25:18 -- accel/accel.sh@21 -- # val= 00:06:15.050 07:25:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # IFS=: 00:06:15.050 07:25:18 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@21 -- # val= 00:06:16.427 07:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # IFS=: 00:06:16.427 07:25:19 -- accel/accel.sh@20 -- # read -r var val 00:06:16.427 07:25:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.427 07:25:19 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:06:16.427 07:25:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.427 00:06:16.427 real 0m2.675s 00:06:16.427 user 0m2.457s 00:06:16.427 sys 0m0.215s 00:06:16.427 07:25:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.427 07:25:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.427 ************************************ 00:06:16.427 END TEST accel_compare 00:06:16.427 ************************************ 00:06:16.427 07:25:20 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:16.427 07:25:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:16.427 07:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.427 07:25:20 -- common/autotest_common.sh@10 -- # set +x 00:06:16.427 ************************************ 00:06:16.427 START TEST accel_xor 00:06:16.427 ************************************ 00:06:16.427 07:25:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:16.427 07:25:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.427 07:25:20 -- accel/accel.sh@17 -- # local accel_module 00:06:16.427 07:25:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:16.427 07:25:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:16.427 07:25:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.427 07:25:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.427 07:25:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.427 07:25:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.427 07:25:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.427 07:25:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.427 07:25:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.427 07:25:20 -- accel/accel.sh@42 -- # jq -r . 
00:06:16.427 [2024-10-07 07:25:20.041973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:16.427 [2024-10-07 07:25:20.042033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957998 ] 00:06:16.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.427 [2024-10-07 07:25:20.099840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.427 [2024-10-07 07:25:20.168502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.806 07:25:21 -- accel/accel.sh@18 -- # out=' 00:06:17.806 SPDK Configuration: 00:06:17.806 Core mask: 0x1 00:06:17.806 00:06:17.806 Accel Perf Configuration: 00:06:17.806 Workload Type: xor 00:06:17.806 Source buffers: 2 00:06:17.806 Transfer size: 4096 bytes 00:06:17.806 Vector count 1 00:06:17.806 Module: software 00:06:17.806 Queue depth: 32 00:06:17.806 Allocate depth: 32 00:06:17.806 # threads/core: 1 00:06:17.806 Run time: 1 seconds 00:06:17.806 Verify: Yes 00:06:17.806 00:06:17.806 Running for 1 seconds... 
00:06:17.806 00:06:17.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.806 ------------------------------------------------------------------------------------ 00:06:17.806 0,0 494464/s 1931 MiB/s 0 0 00:06:17.806 ==================================================================================== 00:06:17.806 Total 494464/s 1931 MiB/s 0 0' 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:17.806 07:25:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:17.806 07:25:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.806 07:25:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.806 07:25:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.806 07:25:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.806 07:25:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.806 07:25:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.806 07:25:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.806 07:25:21 -- accel/accel.sh@42 -- # jq -r . 00:06:17.806 [2024-10-07 07:25:21.389007] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:17.806 [2024-10-07 07:25:21.389072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958224 ] 00:06:17.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.806 [2024-10-07 07:25:21.444615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.806 [2024-10-07 07:25:21.510339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val=0x1 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val=xor 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- 
accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val=2 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.806 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.806 07:25:21 -- accel/accel.sh@21 -- # val=software 00:06:17.806 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val=32 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val=32 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val=1 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- 
# read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val=Yes 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:17.807 07:25:21 -- accel/accel.sh@21 -- # val= 00:06:17.807 07:25:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # IFS=: 00:06:17.807 07:25:21 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@21 -- # val= 00:06:18.744 07:25:22 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # IFS=: 00:06:18.744 07:25:22 -- accel/accel.sh@20 -- # read -r var val 00:06:18.744 07:25:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.744 07:25:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:18.744 07:25:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.744 00:06:18.744 real 0m2.684s 00:06:18.744 user 0m2.466s 00:06:18.744 sys 0m0.215s 00:06:18.744 07:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.744 07:25:22 -- common/autotest_common.sh@10 -- # set +x 00:06:18.744 ************************************ 00:06:18.744 END TEST accel_xor 00:06:18.744 ************************************ 00:06:19.004 07:25:22 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.004 07:25:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.004 07:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.004 07:25:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.004 ************************************ 00:06:19.004 START TEST accel_xor 00:06:19.004 ************************************ 00:06:19.004 07:25:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.004 07:25:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.004 07:25:22 -- accel/accel.sh@17 -- # local accel_module 00:06:19.004 07:25:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.004 07:25:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.004 07:25:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.004 07:25:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.004 07:25:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.004 07:25:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.004 07:25:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.004 07:25:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:19.004 07:25:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.004 07:25:22 -- accel/accel.sh@42 -- # jq -r . 00:06:19.004 [2024-10-07 07:25:22.763855] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:19.004 [2024-10-07 07:25:22.763916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958467 ] 00:06:19.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.004 [2024-10-07 07:25:22.819279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.004 [2024-10-07 07:25:22.889248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.382 07:25:24 -- accel/accel.sh@18 -- # out=' 00:06:20.382 SPDK Configuration: 00:06:20.382 Core mask: 0x1 00:06:20.382 00:06:20.382 Accel Perf Configuration: 00:06:20.382 Workload Type: xor 00:06:20.382 Source buffers: 3 00:06:20.382 Transfer size: 4096 bytes 00:06:20.382 Vector count 1 00:06:20.382 Module: software 00:06:20.382 Queue depth: 32 00:06:20.382 Allocate depth: 32 00:06:20.382 # threads/core: 1 00:06:20.382 Run time: 1 seconds 00:06:20.382 Verify: Yes 00:06:20.382 00:06:20.382 Running for 1 seconds... 
00:06:20.382 00:06:20.382 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.382 ------------------------------------------------------------------------------------ 00:06:20.382 0,0 456256/s 1782 MiB/s 0 0 00:06:20.382 ==================================================================================== 00:06:20.382 Total 456256/s 1782 MiB/s 0 0' 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:20.382 07:25:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:20.382 07:25:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.382 07:25:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.382 07:25:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.382 07:25:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.382 07:25:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.382 07:25:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.382 07:25:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.382 07:25:24 -- accel/accel.sh@42 -- # jq -r . 00:06:20.382 [2024-10-07 07:25:24.101369] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:20.382 [2024-10-07 07:25:24.101429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958696 ] 00:06:20.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.382 [2024-10-07 07:25:24.156419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.382 [2024-10-07 07:25:24.222747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=0x1 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=xor 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=3 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=software 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=32 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=32 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=1 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- 
# read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val=Yes 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:20.382 07:25:24 -- accel/accel.sh@21 -- # val= 00:06:20.382 07:25:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # IFS=: 00:06:20.382 07:25:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@21 -- # val= 00:06:21.761 07:25:25 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # IFS=: 00:06:21.761 07:25:25 -- accel/accel.sh@20 -- # read -r var val 00:06:21.761 07:25:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.761 07:25:25 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:21.761 07:25:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.761 00:06:21.761 real 0m2.680s 00:06:21.761 user 0m2.453s 00:06:21.761 sys 0m0.224s 00:06:21.761 07:25:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.761 07:25:25 -- common/autotest_common.sh@10 -- # set +x 00:06:21.761 ************************************ 00:06:21.761 END TEST accel_xor 00:06:21.761 ************************************ 00:06:21.761 07:25:25 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:21.761 07:25:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:21.761 07:25:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.762 07:25:25 -- common/autotest_common.sh@10 -- # set +x 00:06:21.762 ************************************ 00:06:21.762 START TEST accel_dif_verify 00:06:21.762 ************************************ 00:06:21.762 07:25:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:21.762 07:25:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.762 07:25:25 -- accel/accel.sh@17 -- # local accel_module 00:06:21.762 07:25:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:21.762 07:25:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:21.762 07:25:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.762 07:25:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.762 07:25:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.762 07:25:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.762 07:25:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.762 07:25:25 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:06:21.762 07:25:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.762 07:25:25 -- accel/accel.sh@42 -- # jq -r . 00:06:21.762 [2024-10-07 07:25:25.476373] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:21.762 [2024-10-07 07:25:25.476451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958949 ] 00:06:21.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.762 [2024-10-07 07:25:25.532344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.762 [2024-10-07 07:25:25.599365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.141 07:25:26 -- accel/accel.sh@18 -- # out=' 00:06:23.141 SPDK Configuration: 00:06:23.141 Core mask: 0x1 00:06:23.141 00:06:23.141 Accel Perf Configuration: 00:06:23.141 Workload Type: dif_verify 00:06:23.141 Vector size: 4096 bytes 00:06:23.141 Transfer size: 4096 bytes 00:06:23.141 Block size: 512 bytes 00:06:23.141 Metadata size: 8 bytes 00:06:23.141 Vector count 1 00:06:23.141 Module: software 00:06:23.141 Queue depth: 32 00:06:23.141 Allocate depth: 32 00:06:23.141 # threads/core: 1 00:06:23.141 Run time: 1 seconds 00:06:23.141 Verify: No 00:06:23.141 00:06:23.141 Running for 1 seconds... 
00:06:23.141 00:06:23.141 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.141 ------------------------------------------------------------------------------------ 00:06:23.141 0,0 135328/s 528 MiB/s 0 0 00:06:23.141 ==================================================================================== 00:06:23.141 Total 135328/s 528 MiB/s 0 0' 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:23.141 07:25:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:23.141 07:25:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.141 07:25:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.141 07:25:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.141 07:25:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.141 07:25:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.141 07:25:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.141 07:25:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.141 07:25:26 -- accel/accel.sh@42 -- # jq -r . 00:06:23.141 [2024-10-07 07:25:26.809371] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:23.141 [2024-10-07 07:25:26.809434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959175 ] 00:06:23.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.141 [2024-10-07 07:25:26.863907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.141 [2024-10-07 07:25:26.930513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=0x1 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=dif_verify 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=software 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=32 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=32 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=1 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val=No 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.141 07:25:26 -- accel/accel.sh@21 -- # val= 00:06:23.141 07:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.141 07:25:26 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 
-- accel/accel.sh@22 -- # case "$var" in 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.522 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.522 07:25:28 -- accel/accel.sh@21 -- # val= 00:06:24.522 07:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.523 07:25:28 -- accel/accel.sh@20 -- # IFS=: 00:06:24.523 07:25:28 -- accel/accel.sh@20 -- # read -r var val 00:06:24.523 07:25:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.523 07:25:28 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:24.523 07:25:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.523 00:06:24.523 real 0m2.675s 00:06:24.523 user 0m2.463s 00:06:24.523 sys 0m0.209s 00:06:24.523 07:25:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.523 07:25:28 -- common/autotest_common.sh@10 -- # set +x 00:06:24.523 ************************************ 00:06:24.523 END TEST accel_dif_verify 00:06:24.523 ************************************ 00:06:24.523 07:25:28 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:24.523 07:25:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:24.523 07:25:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.523 07:25:28 -- common/autotest_common.sh@10 -- # set +x 00:06:24.523 ************************************ 00:06:24.523 START TEST accel_dif_generate 00:06:24.523 ************************************ 00:06:24.523 07:25:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:24.523 07:25:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.523 07:25:28 -- accel/accel.sh@17 -- # local accel_module 00:06:24.523 07:25:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:06:24.523 07:25:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:24.523 07:25:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.523 07:25:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.523 07:25:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.523 07:25:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.523 07:25:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.523 07:25:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.523 07:25:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.523 07:25:28 -- accel/accel.sh@42 -- # jq -r . 00:06:24.523 [2024-10-07 07:25:28.183401] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.523 [2024-10-07 07:25:28.183478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959419 ] 00:06:24.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.523 [2024-10-07 07:25:28.239967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.523 [2024-10-07 07:25:28.307740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.901 07:25:29 -- accel/accel.sh@18 -- # out=' 00:06:25.901 SPDK Configuration: 00:06:25.901 Core mask: 0x1 00:06:25.901 00:06:25.901 Accel Perf Configuration: 00:06:25.901 Workload Type: dif_generate 00:06:25.901 Vector size: 4096 bytes 00:06:25.901 Transfer size: 4096 bytes 00:06:25.901 Block size: 512 bytes 00:06:25.901 Metadata size: 8 bytes 00:06:25.901 Vector count 1 00:06:25.901 Module: software 00:06:25.901 Queue depth: 32 00:06:25.901 Allocate depth: 32 00:06:25.901 # threads/core: 1 00:06:25.901 Run time: 1 seconds 00:06:25.901 Verify: No 00:06:25.901 00:06:25.901 Running for 1 seconds... 
00:06:25.901 00:06:25.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.901 ------------------------------------------------------------------------------------ 00:06:25.901 0,0 162816/s 645 MiB/s 0 0 00:06:25.901 ==================================================================================== 00:06:25.901 Total 162816/s 636 MiB/s 0 0' 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:25.901 07:25:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:25.901 07:25:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.901 07:25:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.901 07:25:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.901 07:25:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.901 07:25:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.901 07:25:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.901 07:25:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.901 07:25:29 -- accel/accel.sh@42 -- # jq -r . 00:06:25.901 [2024-10-07 07:25:29.518634] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:25.901 [2024-10-07 07:25:29.518682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959654 ] 00:06:25.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.901 [2024-10-07 07:25:29.571841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.901 [2024-10-07 07:25:29.638086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=0x1 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=dif_generate 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 
-- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=software 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=32 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=32 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 
-- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=1 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val=No 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:25.901 07:25:29 -- accel/accel.sh@21 -- # val= 00:06:25.901 07:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # IFS=: 00:06:25.901 07:25:29 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 
07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@21 -- # val= 00:06:27.280 07:25:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.280 07:25:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.280 07:25:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.280 07:25:30 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:27.280 07:25:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.280 00:06:27.280 real 0m2.679s 00:06:27.280 user 0m2.464s 00:06:27.280 sys 0m0.213s 00:06:27.280 07:25:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.280 07:25:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.280 ************************************ 00:06:27.280 END TEST accel_dif_generate 00:06:27.280 ************************************ 00:06:27.280 07:25:30 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:27.280 07:25:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:27.280 07:25:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.280 07:25:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.281 ************************************ 00:06:27.281 START TEST accel_dif_generate_copy 00:06:27.281 ************************************ 00:06:27.281 07:25:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:27.281 07:25:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.281 07:25:30 -- accel/accel.sh@17 -- # local accel_module 00:06:27.281 07:25:30 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:06:27.281 07:25:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:27.281 07:25:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.281 07:25:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.281 07:25:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.281 07:25:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.281 07:25:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.281 07:25:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.281 07:25:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.281 07:25:30 -- accel/accel.sh@42 -- # jq -r . 00:06:27.281 [2024-10-07 07:25:30.890839] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:27.281 [2024-10-07 07:25:30.890896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959897 ] 00:06:27.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.281 [2024-10-07 07:25:30.945814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.281 [2024-10-07 07:25:31.012780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.660 07:25:32 -- accel/accel.sh@18 -- # out=' 00:06:28.660 SPDK Configuration: 00:06:28.660 Core mask: 0x1 00:06:28.660 00:06:28.660 Accel Perf Configuration: 00:06:28.660 Workload Type: dif_generate_copy 00:06:28.660 Vector size: 4096 bytes 00:06:28.660 Transfer size: 4096 bytes 00:06:28.660 Vector count 1 00:06:28.660 Module: software 00:06:28.660 Queue depth: 32 00:06:28.660 Allocate depth: 32 00:06:28.660 # threads/core: 1 00:06:28.660 Run time: 1 seconds 00:06:28.660 Verify: No 00:06:28.660 00:06:28.660 Running for 1 seconds... 
00:06:28.660 00:06:28.660 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.660 ------------------------------------------------------------------------------------ 00:06:28.660 0,0 125600/s 498 MiB/s 0 0 00:06:28.660 ==================================================================================== 00:06:28.660 Total 125600/s 490 MiB/s 0 0' 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.660 07:25:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.660 07:25:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.660 07:25:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.660 07:25:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.660 07:25:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.660 07:25:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.660 07:25:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.660 07:25:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.660 07:25:32 -- accel/accel.sh@42 -- # jq -r . 00:06:28.660 [2024-10-07 07:25:32.224114] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:28.660 [2024-10-07 07:25:32.224164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960123 ] 00:06:28.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.660 [2024-10-07 07:25:32.277333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.660 [2024-10-07 07:25:32.345374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=0x1 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 
07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=software 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=32 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=32 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=1 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 
-- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val=No 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:28.660 07:25:32 -- accel/accel.sh@21 -- # val= 00:06:28.660 07:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # IFS=: 00:06:28.660 07:25:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@21 -- # val= 00:06:29.598 07:25:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # IFS=: 00:06:29.598 07:25:33 -- accel/accel.sh@20 -- # read -r var val 00:06:29.598 07:25:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.598 07:25:33 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:29.598 07:25:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.598 00:06:29.598 real 0m2.674s 00:06:29.598 user 0m2.453s 00:06:29.598 sys 0m0.218s 00:06:29.598 07:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.598 07:25:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.598 ************************************ 00:06:29.598 END TEST accel_dif_generate_copy 00:06:29.598 ************************************ 00:06:29.857 07:25:33 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:29.857 07:25:33 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.857 07:25:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:29.857 07:25:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.857 07:25:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.857 ************************************ 00:06:29.857 START TEST accel_comp 00:06:29.857 ************************************ 00:06:29.857 07:25:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.857 07:25:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.857 07:25:33 -- accel/accel.sh@17 -- # local accel_module 00:06:29.857 07:25:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.857 07:25:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:06:29.857 07:25:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.857 07:25:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.857 07:25:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.857 07:25:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.857 07:25:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.857 07:25:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.857 07:25:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.857 07:25:33 -- accel/accel.sh@42 -- # jq -r . 00:06:29.857 [2024-10-07 07:25:33.597198] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:29.858 [2024-10-07 07:25:33.597255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960372 ] 00:06:29.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.858 [2024-10-07 07:25:33.652336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.858 [2024-10-07 07:25:33.720394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.234 07:25:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:31.234 00:06:31.234 SPDK Configuration: 00:06:31.234 Core mask: 0x1 00:06:31.234 00:06:31.234 Accel Perf Configuration: 00:06:31.234 Workload Type: compress 00:06:31.234 Transfer size: 4096 bytes 00:06:31.234 Vector count 1 00:06:31.234 Module: software 00:06:31.234 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.234 Queue depth: 32 00:06:31.234 Allocate depth: 32 00:06:31.234 # threads/core: 1 00:06:31.234 Run time: 1 seconds 00:06:31.234 Verify: No 00:06:31.234 00:06:31.234 Running for 1 seconds... 
00:06:31.234 00:06:31.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.234 ------------------------------------------------------------------------------------ 00:06:31.234 0,0 63200/s 263 MiB/s 0 0 00:06:31.234 ==================================================================================== 00:06:31.234 Total 63200/s 246 MiB/s 0 0' 00:06:31.234 07:25:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.234 07:25:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.234 07:25:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.235 07:25:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.235 07:25:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.235 07:25:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.235 07:25:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.235 07:25:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.235 07:25:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.235 07:25:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.235 07:25:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.235 07:25:34 -- accel/accel.sh@42 -- # jq -r . 00:06:31.235 [2024-10-07 07:25:34.945458] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:31.235 [2024-10-07 07:25:34.945537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960601 ] 00:06:31.235 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.235 [2024-10-07 07:25:35.002412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.235 [2024-10-07 07:25:35.068333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=0x1 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 
-- # val=compress 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=software 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=32 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=32 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=1 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 
00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val=No 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:31.235 07:25:35 -- accel/accel.sh@21 -- # val= 00:06:31.235 07:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # IFS=: 00:06:31.235 07:25:35 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # 
val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@21 -- # val= 00:06:32.611 07:25:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # IFS=: 00:06:32.611 07:25:36 -- accel/accel.sh@20 -- # read -r var val 00:06:32.611 07:25:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.611 07:25:36 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:32.611 07:25:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.611 00:06:32.611 real 0m2.693s 00:06:32.611 user 0m2.471s 00:06:32.611 sys 0m0.219s 00:06:32.611 07:25:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.611 07:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.611 ************************************ 00:06:32.611 END TEST accel_comp 00:06:32.611 ************************************ 00:06:32.611 07:25:36 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.611 07:25:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:32.611 07:25:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.611 07:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.612 ************************************ 00:06:32.612 START TEST accel_decomp 00:06:32.612 ************************************ 00:06:32.612 07:25:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.612 07:25:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.612 07:25:36 -- accel/accel.sh@17 -- # local accel_module 00:06:32.612 07:25:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.612 07:25:36 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.612 07:25:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.612 07:25:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.612 07:25:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.612 07:25:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.612 07:25:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.612 07:25:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.612 07:25:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.612 07:25:36 -- accel/accel.sh@42 -- # jq -r . 00:06:32.612 [2024-10-07 07:25:36.322760] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:32.612 [2024-10-07 07:25:36.322819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960842 ] 00:06:32.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.612 [2024-10-07 07:25:36.377868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.612 [2024-10-07 07:25:36.447296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.987 07:25:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:33.987 00:06:33.987 SPDK Configuration: 00:06:33.987 Core mask: 0x1 00:06:33.987 00:06:33.987 Accel Perf Configuration: 00:06:33.987 Workload Type: decompress 00:06:33.987 Transfer size: 4096 bytes 00:06:33.987 Vector count 1 00:06:33.987 Module: software 00:06:33.987 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.987 Queue depth: 32 00:06:33.987 Allocate depth: 32 00:06:33.987 # threads/core: 1 00:06:33.987 Run time: 1 seconds 00:06:33.987 Verify: Yes 00:06:33.987 00:06:33.987 Running for 1 seconds... 
00:06:33.987 00:06:33.987 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.987 ------------------------------------------------------------------------------------ 00:06:33.987 0,0 70016/s 129 MiB/s 0 0 00:06:33.987 ==================================================================================== 00:06:33.987 Total 70016/s 273 MiB/s 0 0' 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.987 07:25:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.987 07:25:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.987 07:25:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.987 07:25:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.987 07:25:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.987 07:25:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.987 07:25:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.987 07:25:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.987 07:25:37 -- accel/accel.sh@42 -- # jq -r . 00:06:33.987 [2024-10-07 07:25:37.668803] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:33.987 [2024-10-07 07:25:37.668862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961076 ] 00:06:33.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.987 [2024-10-07 07:25:37.723390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.987 [2024-10-07 07:25:37.792388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=0x1 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 
-- # val=decompress 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=software 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=32 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=32 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=1 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # 
IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val=Yes 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:33.987 07:25:37 -- accel/accel.sh@21 -- # val= 00:06:33.987 07:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # IFS=: 00:06:33.987 07:25:37 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 -- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 -- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 -- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 -- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 
-- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@21 -- # val= 00:06:35.361 07:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 07:25:38 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 07:25:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.361 07:25:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:35.361 07:25:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.361 00:06:35.361 real 0m2.695s 00:06:35.361 user 0m2.463s 00:06:35.361 sys 0m0.231s 00:06:35.361 07:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.361 07:25:38 -- common/autotest_common.sh@10 -- # set +x 00:06:35.361 ************************************ 00:06:35.361 END TEST accel_decomp 00:06:35.361 ************************************ 00:06:35.361 07:25:39 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:35.361 07:25:39 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:35.361 07:25:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.361 07:25:39 -- common/autotest_common.sh@10 -- # set +x 00:06:35.361 ************************************ 00:06:35.361 START TEST accel_decmop_full 00:06:35.361 ************************************ 00:06:35.361 07:25:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:35.361 07:25:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.361 07:25:39 -- accel/accel.sh@17 -- # local accel_module 00:06:35.361 07:25:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:06:35.361 07:25:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:35.361 07:25:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.361 07:25:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.361 07:25:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.361 07:25:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.361 07:25:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.361 07:25:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.361 07:25:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.361 07:25:39 -- accel/accel.sh@42 -- # jq -r . 00:06:35.362 [2024-10-07 07:25:39.052919] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:35.362 [2024-10-07 07:25:39.052996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961317 ] 00:06:35.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.362 [2024-10-07 07:25:39.110284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.362 [2024-10-07 07:25:39.184958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.738 07:25:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:36.738 00:06:36.738 SPDK Configuration: 00:06:36.738 Core mask: 0x1 00:06:36.738 00:06:36.738 Accel Perf Configuration: 00:06:36.738 Workload Type: decompress 00:06:36.738 Transfer size: 111250 bytes 00:06:36.738 Vector count 1 00:06:36.738 Module: software 00:06:36.738 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.738 Queue depth: 32 00:06:36.738 Allocate depth: 32 00:06:36.738 # threads/core: 1 00:06:36.738 Run time: 1 seconds 00:06:36.738 Verify: Yes 00:06:36.738 00:06:36.738 Running for 1 seconds... 00:06:36.738 00:06:36.738 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.738 ------------------------------------------------------------------------------------ 00:06:36.738 0,0 4768/s 196 MiB/s 0 0 00:06:36.738 ==================================================================================== 00:06:36.738 Total 4768/s 505 MiB/s 0 0' 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.738 07:25:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.738 07:25:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.738 07:25:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.738 07:25:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.738 07:25:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.738 07:25:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.738 07:25:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.738 07:25:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.738 07:25:40 -- accel/accel.sh@42 -- # jq -r . 
00:06:36.738 [2024-10-07 07:25:40.410012] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.738 [2024-10-07 07:25:40.410099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961553 ] 00:06:36.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.738 [2024-10-07 07:25:40.464863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.738 [2024-10-07 07:25:40.535591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=0x1 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- 
accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=decompress 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=software 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=32 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=32 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- 
accel/accel.sh@21 -- # val=1 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val=Yes 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:36.738 07:25:40 -- accel/accel.sh@21 -- # val= 00:06:36.738 07:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.738 07:25:40 -- accel/accel.sh@20 -- # IFS=: 00:06:36.739 07:25:40 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 
-- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@21 -- # val= 00:06:38.116 07:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.116 07:25:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.116 07:25:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.116 07:25:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:38.116 07:25:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.116 00:06:38.116 real 0m2.714s 00:06:38.116 user 0m2.487s 00:06:38.116 sys 0m0.223s 00:06:38.116 07:25:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.116 07:25:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.116 ************************************ 00:06:38.116 END TEST accel_decmop_full 00:06:38.116 ************************************ 00:06:38.116 07:25:41 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.116 07:25:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:38.116 07:25:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.116 07:25:41 -- common/autotest_common.sh@10 -- # set +x 00:06:38.116 ************************************ 00:06:38.116 START TEST accel_decomp_mcore 00:06:38.116 ************************************ 00:06:38.116 07:25:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.116 07:25:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.116 07:25:41 -- accel/accel.sh@17 -- # local 
accel_module 00:06:38.116 07:25:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.116 07:25:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.116 07:25:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.116 07:25:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.116 07:25:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.116 07:25:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.116 07:25:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.116 07:25:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.116 07:25:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.116 07:25:41 -- accel/accel.sh@42 -- # jq -r . 00:06:38.116 [2024-10-07 07:25:41.796750] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:38.117 [2024-10-07 07:25:41.796807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961800 ] 00:06:38.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.117 [2024-10-07 07:25:41.853367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.117 [2024-10-07 07:25:41.924151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.117 [2024-10-07 07:25:41.924187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.117 [2024-10-07 07:25:41.924281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.117 [2024-10-07 07:25:41.924283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.496 07:25:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:39.496 00:06:39.496 SPDK Configuration: 00:06:39.496 Core mask: 0xf 00:06:39.496 00:06:39.496 Accel Perf Configuration: 00:06:39.496 Workload Type: decompress 00:06:39.496 Transfer size: 4096 bytes 00:06:39.496 Vector count 1 00:06:39.496 Module: software 00:06:39.496 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.496 Queue depth: 32 00:06:39.496 Allocate depth: 32 00:06:39.496 # threads/core: 1 00:06:39.496 Run time: 1 seconds 00:06:39.496 Verify: Yes 00:06:39.496 00:06:39.496 Running for 1 seconds... 00:06:39.496 00:06:39.496 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.496 ------------------------------------------------------------------------------------ 00:06:39.496 0,0 60896/s 112 MiB/s 0 0 00:06:39.496 3,0 62496/s 115 MiB/s 0 0 00:06:39.496 2,0 62432/s 115 MiB/s 0 0 00:06:39.496 1,0 62496/s 115 MiB/s 0 0 00:06:39.496 ==================================================================================== 00:06:39.496 Total 248320/s 970 MiB/s 0 0' 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.496 07:25:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.496 07:25:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.496 07:25:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.496 07:25:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.496 07:25:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.496 07:25:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.496 07:25:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.496 07:25:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.496 07:25:43 -- 
accel/accel.sh@42 -- # jq -r . 00:06:39.496 [2024-10-07 07:25:43.155133] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:39.496 [2024-10-07 07:25:43.155208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962032 ] 00:06:39.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.496 [2024-10-07 07:25:43.215031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.496 [2024-10-07 07:25:43.284629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.496 [2024-10-07 07:25:43.284728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.496 [2024-10-07 07:25:43.284836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.496 [2024-10-07 07:25:43.284838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=0xf 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- 
accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=decompress 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=software 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=32 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- 
accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=32 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.496 07:25:43 -- accel/accel.sh@21 -- # val=1 00:06:39.496 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.496 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.497 07:25:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.497 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.497 07:25:43 -- accel/accel.sh@21 -- # val=Yes 00:06:39.497 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.497 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.497 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:39.497 07:25:43 -- accel/accel.sh@21 -- # val= 00:06:39.497 07:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # IFS=: 00:06:39.497 07:25:43 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 
07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@21 -- # val= 00:06:40.878 07:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # IFS=: 00:06:40.878 07:25:44 -- accel/accel.sh@20 -- # read -r var val 00:06:40.878 07:25:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.878 07:25:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.878 07:25:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.878 00:06:40.878 real 0m2.725s 00:06:40.878 user 0m9.142s 00:06:40.878 sys 0m0.244s 00:06:40.878 07:25:44 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:40.878 07:25:44 -- common/autotest_common.sh@10 -- # set +x 00:06:40.878 ************************************ 00:06:40.878 END TEST accel_decomp_mcore 00:06:40.878 ************************************ 00:06:40.878 07:25:44 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.878 07:25:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:40.878 07:25:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.878 07:25:44 -- common/autotest_common.sh@10 -- # set +x 00:06:40.878 ************************************ 00:06:40.878 START TEST accel_decomp_full_mcore 00:06:40.878 ************************************ 00:06:40.878 07:25:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.878 07:25:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.878 07:25:44 -- accel/accel.sh@17 -- # local accel_module 00:06:40.878 07:25:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.878 07:25:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.878 07:25:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.878 07:25:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.878 07:25:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.878 07:25:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.878 07:25:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.878 07:25:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.878 07:25:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.878 07:25:44 -- accel/accel.sh@42 -- # jq -r . 
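The accel_decomp_mcore run above fans the 4096-byte decompress workload out over the four cores in mask 0xf. A minimal POSIX-shell sketch of how the per-core transfer rates in that table roll up into the reported totals (the rates are copied from the log; everything else is plain shell arithmetic, not part of the test harness):

```shell
# Per-core transfers/s from the accel_decomp_mcore table (cores 0-3, mask 0xf)
rates="60896 62496 62432 62496"

total=0
for r in $rates; do
  total=$((total + r))
done

# Each transfer is 4096 bytes, so MiB/s = transfers/s * 4096 / 2^20
bw=$((total * 4096 / 1048576))
echo "Total ${total}/s  ${bw} MiB/s"   # prints "Total 248320/s  970 MiB/s"
```

This reproduces the "Total 248320/s 970 MiB/s" summary line exactly, confirming the table's Total row is the plain sum of the per-core rows.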
00:06:40.878 [2024-10-07 07:25:44.563130] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:40.878 [2024-10-07 07:25:44.563190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962283 ] 00:06:40.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.878 [2024-10-07 07:25:44.619813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.878 [2024-10-07 07:25:44.692829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.878 [2024-10-07 07:25:44.692928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.878 [2024-10-07 07:25:44.693014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.878 [2024-10-07 07:25:44.693031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.260 07:25:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:42.260 00:06:42.260 SPDK Configuration: 00:06:42.260 Core mask: 0xf 00:06:42.260 00:06:42.260 Accel Perf Configuration: 00:06:42.260 Workload Type: decompress 00:06:42.260 Transfer size: 111250 bytes 00:06:42.260 Vector count 1 00:06:42.260 Module: software 00:06:42.260 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.260 Queue depth: 32 00:06:42.260 Allocate depth: 32 00:06:42.260 # threads/core: 1 00:06:42.260 Run time: 1 seconds 00:06:42.260 Verify: Yes 00:06:42.260 00:06:42.260 Running for 1 seconds... 
00:06:42.260 00:06:42.260 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.260 ------------------------------------------------------------------------------------ 00:06:42.260 0,0 4640/s 191 MiB/s 0 0 00:06:42.260 3,0 4800/s 198 MiB/s 0 0 00:06:42.260 2,0 4800/s 198 MiB/s 0 0 00:06:42.260 1,0 4800/s 198 MiB/s 0 0 00:06:42.260 ==================================================================================== 00:06:42.260 Total 19040/s 2020 MiB/s 0 0' 00:06:42.260 07:25:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.260 07:25:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.260 07:25:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.260 07:25:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.260 07:25:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.260 07:25:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.260 07:25:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.260 07:25:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.260 07:25:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.260 07:25:45 -- accel/accel.sh@42 -- # jq -r . 00:06:42.260 [2024-10-07 07:25:45.936905] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:42.260 [2024-10-07 07:25:45.936983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962522 ] 00:06:42.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.260 [2024-10-07 07:25:45.992776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.260 [2024-10-07 07:25:46.063135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.260 [2024-10-07 07:25:46.063228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.260 [2024-10-07 07:25:46.063336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.260 [2024-10-07 07:25:46.063337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=0xf 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 
-- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=decompress 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=software 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=32 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=32 00:06:42.260 07:25:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=1 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val=Yes 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:42.260 07:25:46 -- accel/accel.sh@21 -- # val= 00:06:42.260 07:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # IFS=: 00:06:42.260 07:25:46 -- accel/accel.sh@20 -- # read -r var val 00:06:43.640 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.640 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.640 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 
07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@21 -- # val= 00:06:43.641 07:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # IFS=: 00:06:43.641 07:25:47 -- accel/accel.sh@20 -- # read -r var val 00:06:43.641 07:25:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.641 07:25:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:43.641 07:25:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.641 00:06:43.641 real 0m2.750s 00:06:43.641 user 0m9.239s 00:06:43.641 sys 0m0.243s 00:06:43.641 07:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.641 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.641 ************************************ 00:06:43.641 END TEST 
accel_decomp_full_mcore 00:06:43.641 ************************************ 00:06:43.641 07:25:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.641 07:25:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:43.641 07:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.641 07:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.641 ************************************ 00:06:43.641 START TEST accel_decomp_mthread 00:06:43.641 ************************************ 00:06:43.641 07:25:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.641 07:25:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.641 07:25:47 -- accel/accel.sh@17 -- # local accel_module 00:06:43.641 07:25:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.641 07:25:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.641 07:25:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.641 07:25:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.641 07:25:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.641 07:25:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.641 07:25:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.641 07:25:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.641 07:25:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.641 07:25:47 -- accel/accel.sh@42 -- # jq -r . 00:06:43.641 [2024-10-07 07:25:47.352464] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
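The full-buffer variant above (`-o 0`) switches the transfer size to 111250 bytes, which is why the per-core transfer counts drop sharply while bandwidth roughly doubles. The same roll-up, sketched with the numbers from the accel_decomp_full_mcore table (only the figures are taken from the log; the arithmetic is illustrative):

```shell
# Per-core transfers/s from the accel_decomp_full_mcore table (111250-byte transfers)
rates="4640 4800 4800 4800"

total=0
for r in $rates; do
  total=$((total + r))
done

# Integer MiB/s at the larger transfer size
bw=$((total * 111250 / 1048576))
echo "Total ${total}/s  ${bw} MiB/s"   # prints "Total 19040/s  2020 MiB/s"
```

Fewer, larger transfers amortize per-operation overhead, which is the point of exercising the `-o 0` path separately.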
00:06:43.641 [2024-10-07 07:25:47.352523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962787 ] 00:06:43.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.641 [2024-10-07 07:25:47.407779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.641 [2024-10-07 07:25:47.476533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.023 07:25:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:45.023 00:06:45.023 SPDK Configuration: 00:06:45.023 Core mask: 0x1 00:06:45.023 00:06:45.023 Accel Perf Configuration: 00:06:45.023 Workload Type: decompress 00:06:45.023 Transfer size: 4096 bytes 00:06:45.023 Vector count 1 00:06:45.023 Module: software 00:06:45.023 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.023 Queue depth: 32 00:06:45.023 Allocate depth: 32 00:06:45.023 # threads/core: 2 00:06:45.023 Run time: 1 seconds 00:06:45.023 Verify: Yes 00:06:45.023 00:06:45.023 Running for 1 seconds... 
00:06:45.023 00:06:45.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.023 ------------------------------------------------------------------------------------ 00:06:45.023 0,1 38240/s 70 MiB/s 0 0 00:06:45.023 0,0 38144/s 70 MiB/s 0 0 00:06:45.023 ==================================================================================== 00:06:45.023 Total 76384/s 298 MiB/s 0 0' 00:06:45.023 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.023 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.023 07:25:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.023 07:25:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.023 07:25:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.023 07:25:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.023 07:25:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.023 07:25:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.023 07:25:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.023 07:25:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.023 07:25:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.023 07:25:48 -- accel/accel.sh@42 -- # jq -r . 00:06:45.023 [2024-10-07 07:25:48.704980] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:45.023 [2024-10-07 07:25:48.705064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963004 ] 00:06:45.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.023 [2024-10-07 07:25:48.761091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.024 [2024-10-07 07:25:48.828535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=0x1 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 
-- # val=decompress 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=software 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=32 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=32 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=2 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # 
IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val=Yes 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:45.024 07:25:48 -- accel/accel.sh@21 -- # val= 00:06:45.024 07:25:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # IFS=: 00:06:45.024 07:25:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 
-- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@21 -- # val= 00:06:46.404 07:25:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.404 07:25:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.404 07:25:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.404 07:25:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.404 07:25:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.404 00:06:46.404 real 0m2.709s 00:06:46.404 user 0m2.490s 00:06:46.404 sys 0m0.229s 00:06:46.404 07:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.404 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:46.404 ************************************ 00:06:46.404 END TEST accel_decomp_mthread 00:06:46.404 ************************************ 00:06:46.404 07:25:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.404 07:25:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:46.404 07:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.404 07:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:46.404 ************************************ 00:06:46.404 START TEST accel_deomp_full_mthread 00:06:46.404 ************************************ 00:06:46.404 07:25:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
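With `-T 2` the mthread test above keeps core mask 0x1 but runs two worker threads on core 0, which is why its result table has two `0,N` rows instead of one row per core. Sketching the same aggregation for that run (figures from the log, arithmetic illustrative):

```shell
# Per-thread transfers/s for core 0, threads 0 and 1 (accel_decomp_mthread, -T 2)
rates="38240 38144"

total=0
for r in $rates; do
  total=$((total + r))
done

# 4096-byte transfers again, so the same MiB/s conversion applies
bw=$((total * 4096 / 1048576))
echo "Total ${total}/s  ${bw} MiB/s"   # prints "Total 76384/s  298 MiB/s"
```

The near-identical per-thread rates show the two threads sharing core 0 evenly rather than one starving the other.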
00:06:46.404 07:25:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.404 07:25:50 -- accel/accel.sh@17 -- # local accel_module 00:06:46.404 07:25:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.404 07:25:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.405 07:25:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.405 07:25:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.405 07:25:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.405 07:25:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.405 07:25:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.405 07:25:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.405 07:25:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.405 07:25:50 -- accel/accel.sh@42 -- # jq -r . 00:06:46.405 [2024-10-07 07:25:50.099869] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:46.405 [2024-10-07 07:25:50.099928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963271 ] 00:06:46.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.405 [2024-10-07 07:25:50.155532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.405 [2024-10-07 07:25:50.225046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.876 07:25:51 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:47.876 00:06:47.876 SPDK Configuration: 00:06:47.876 Core mask: 0x1 00:06:47.876 00:06:47.876 Accel Perf Configuration: 00:06:47.876 Workload Type: decompress 00:06:47.876 Transfer size: 111250 bytes 00:06:47.877 Vector count 1 00:06:47.877 Module: software 00:06:47.877 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.877 Queue depth: 32 00:06:47.877 Allocate depth: 32 00:06:47.877 # threads/core: 2 00:06:47.877 Run time: 1 seconds 00:06:47.877 Verify: Yes 00:06:47.877 00:06:47.877 Running for 1 seconds... 00:06:47.877 00:06:47.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.877 ------------------------------------------------------------------------------------ 00:06:47.877 0,1 2496/s 103 MiB/s 0 0 00:06:47.877 0,0 2464/s 101 MiB/s 0 0 00:06:47.877 ==================================================================================== 00:06:47.877 Total 4960/s 526 MiB/s 0 0' 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.877 07:25:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.877 07:25:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.877 07:25:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.877 07:25:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.877 07:25:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.877 07:25:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.877 07:25:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.877 07:25:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.877 07:25:51 -- accel/accel.sh@42 -- # jq -r . 
00:06:47.877 [2024-10-07 07:25:51.470785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:47.877 [2024-10-07 07:25:51.470846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963521 ] 00:06:47.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.877 [2024-10-07 07:25:51.527145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.877 [2024-10-07 07:25:51.595270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=0x1 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- 
accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=decompress 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=software 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=32 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=32 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- 
accel/accel.sh@21 -- # val=2 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val=Yes 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 07:25:51 -- accel/accel.sh@21 -- # val= 00:06:47.877 07:25:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 07:25:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 
-- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@21 -- # val= 00:06:48.907 07:25:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # IFS=: 00:06:48.907 07:25:52 -- accel/accel.sh@20 -- # read -r var val 00:06:48.907 07:25:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.907 07:25:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:48.907 07:25:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.907 00:06:48.907 real 0m2.745s 00:06:48.907 user 0m2.517s 00:06:48.907 sys 0m0.227s 00:06:48.907 07:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.907 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.907 ************************************ 00:06:48.907 END TEST accel_deomp_full_mthread 00:06:48.907 ************************************ 00:06:48.907 07:25:52 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:48.907 07:25:52 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.907 07:25:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:48.907 07:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.907 07:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.907 07:25:52 -- accel/accel.sh@129 -- # build_accel_config 00:06:48.907 07:25:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.907 
07:25:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.907 07:25:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.907 07:25:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.907 07:25:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.907 07:25:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.907 07:25:52 -- accel/accel.sh@42 -- # jq -r . 00:06:48.907 ************************************ 00:06:48.907 START TEST accel_dif_functional_tests 00:06:48.907 ************************************ 00:06:48.907 07:25:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:49.167 [2024-10-07 07:25:52.891238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:49.167 [2024-10-07 07:25:52.891284] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963799 ] 00:06:49.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.167 [2024-10-07 07:25:52.944449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.167 [2024-10-07 07:25:53.014501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.167 [2024-10-07 07:25:53.014523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.167 [2024-10-07 07:25:53.014525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.167 00:06:49.167 00:06:49.167 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.167 http://cunit.sourceforge.net/ 00:06:49.167 00:06:49.167 00:06:49.167 Suite: accel_dif 00:06:49.167 Test: verify: DIF generated, GUARD check ...passed 00:06:49.167 Test: verify: DIF generated, APPTAG check ...passed 00:06:49.167 Test: verify: DIF generated, REFTAG check ...passed 00:06:49.167 Test: verify: DIF not generated, GUARD check ...[2024-10-07 07:25:53.081125] 
dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.167 [2024-10-07 07:25:53.081172] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.167 passed 00:06:49.167 Test: verify: DIF not generated, APPTAG check ...[2024-10-07 07:25:53.081201] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.167 [2024-10-07 07:25:53.081216] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.167 passed 00:06:49.167 Test: verify: DIF not generated, REFTAG check ...[2024-10-07 07:25:53.081235] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.167 [2024-10-07 07:25:53.081248] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.167 passed 00:06:49.167 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:49.167 Test: verify: APPTAG incorrect, APPTAG check ...[2024-10-07 07:25:53.081286] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:49.167 passed 00:06:49.167 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:49.167 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:49.167 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:49.167 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-10-07 07:25:53.081379] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:49.167 passed 00:06:49.167 Test: generate copy: DIF generated, GUARD check ...passed 00:06:49.167 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:49.167 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:49.167 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:49.167 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:06:49.167 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:49.167 Test: generate copy: iovecs-len validate ...[2024-10-07 07:25:53.081539] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:49.167 passed 00:06:49.167 Test: generate copy: buffer alignment validate ...passed 00:06:49.167 00:06:49.167 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.167 suites 1 1 n/a 0 0 00:06:49.167 tests 20 20 20 0 0 00:06:49.167 asserts 204 204 204 0 n/a 00:06:49.167 00:06:49.167 Elapsed time = 0.002 seconds 00:06:49.427 00:06:49.427 real 0m0.419s 00:06:49.427 user 0m0.641s 00:06:49.427 sys 0m0.131s 00:06:49.427 07:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.427 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 ************************************ 00:06:49.427 END TEST accel_dif_functional_tests 00:06:49.427 ************************************ 00:06:49.427 00:06:49.427 real 0m57.307s 00:06:49.427 user 1m5.851s 00:06:49.427 sys 0m5.930s 00:06:49.427 07:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.427 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 ************************************ 00:06:49.427 END TEST accel 00:06:49.427 ************************************ 00:06:49.427 07:25:53 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:49.427 07:25:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.427 07:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.427 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 ************************************ 00:06:49.427 START TEST accel_rpc 00:06:49.427 ************************************ 00:06:49.427 07:25:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:06:49.687 * Looking for test storage... 00:06:49.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:49.688 07:25:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.688 07:25:53 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:49.688 07:25:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3963999 00:06:49.688 07:25:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 3963999 00:06:49.688 07:25:53 -- common/autotest_common.sh@819 -- # '[' -z 3963999 ']' 00:06:49.688 07:25:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.688 07:25:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.688 07:25:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.688 07:25:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.688 07:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.688 [2024-10-07 07:25:53.457548] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:49.688 [2024-10-07 07:25:53.457596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963999 ] 00:06:49.688 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.688 [2024-10-07 07:25:53.511244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.688 [2024-10-07 07:25:53.586054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.688 [2024-10-07 07:25:53.586174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.625 07:25:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.625 07:25:54 -- common/autotest_common.sh@852 -- # return 0 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:50.625 07:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.625 07:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 ************************************ 00:06:50.625 START TEST accel_assign_opcode 00:06:50.625 ************************************ 00:06:50.625 07:25:54 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:50.625 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 [2024-10-07 07:25:54.276177] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:06:50.625 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:50.625 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 [2024-10-07 07:25:54.284192] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:50.625 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:50.625 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:50.625 07:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@42 -- # grep software 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:50.625 07:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.625 software 00:06:50.625 00:06:50.625 real 0m0.239s 00:06:50.625 user 0m0.045s 00:06:50.625 sys 0m0.007s 00:06:50.625 07:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.625 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.625 ************************************ 00:06:50.625 END TEST accel_assign_opcode 00:06:50.625 ************************************ 00:06:50.625 07:25:54 -- accel/accel_rpc.sh@55 -- # killprocess 3963999 00:06:50.625 07:25:54 -- common/autotest_common.sh@926 -- # '[' -z 3963999 ']' 00:06:50.625 07:25:54 -- common/autotest_common.sh@930 -- # kill -0 3963999 00:06:50.625 07:25:54 -- common/autotest_common.sh@931 -- # uname 00:06:50.625 
07:25:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.625 07:25:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3963999 00:06:50.885 07:25:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.885 07:25:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.885 07:25:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3963999' 00:06:50.885 killing process with pid 3963999 00:06:50.885 07:25:54 -- common/autotest_common.sh@945 -- # kill 3963999 00:06:50.885 07:25:54 -- common/autotest_common.sh@950 -- # wait 3963999 00:06:51.145 00:06:51.145 real 0m1.586s 00:06:51.145 user 0m1.670s 00:06:51.145 sys 0m0.384s 00:06:51.145 07:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.145 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:51.145 ************************************ 00:06:51.145 END TEST accel_rpc 00:06:51.145 ************************************ 00:06:51.145 07:25:54 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:51.145 07:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.145 07:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.145 07:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:51.145 ************************************ 00:06:51.145 START TEST app_cmdline 00:06:51.145 ************************************ 00:06:51.145 07:25:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:51.145 * Looking for test storage... 
00:06:51.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:51.145 07:25:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:51.145 07:25:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3964300 00:06:51.145 07:25:55 -- app/cmdline.sh@18 -- # waitforlisten 3964300 00:06:51.145 07:25:55 -- common/autotest_common.sh@819 -- # '[' -z 3964300 ']' 00:06:51.145 07:25:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.145 07:25:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.145 07:25:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.145 07:25:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.145 07:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:51.145 07:25:55 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:51.145 [2024-10-07 07:25:55.093072] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:51.145 [2024-10-07 07:25:55.093124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964300 ] 00:06:51.404 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.404 [2024-10-07 07:25:55.147563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.404 [2024-10-07 07:25:55.221390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.404 [2024-10-07 07:25:55.221535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.972 07:25:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.972 07:25:55 -- common/autotest_common.sh@852 -- # return 0 00:06:51.972 07:25:55 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:52.231 { 00:06:52.231 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:06:52.231 "fields": { 00:06:52.231 "major": 24, 00:06:52.231 "minor": 1, 00:06:52.232 "patch": 1, 00:06:52.232 "suffix": "-pre", 00:06:52.232 "commit": "726a04d70" 00:06:52.232 } 00:06:52.232 } 00:06:52.232 07:25:56 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:52.232 07:25:56 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:52.232 07:25:56 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:52.232 07:25:56 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:52.232 07:25:56 -- app/cmdline.sh@26 -- # sort 00:06:52.232 07:25:56 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:52.232 07:25:56 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:52.232 07:25:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.232 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.232 07:25:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.232 
07:25:56 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:52.232 07:25:56 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:52.232 07:25:56 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.232 07:25:56 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.232 07:25:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.232 07:25:56 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.232 07:25:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.232 07:25:56 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.232 07:25:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.232 07:25:56 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.232 07:25:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.232 07:25:56 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.232 07:25:56 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:52.232 07:25:56 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.491 request: 00:06:52.491 { 00:06:52.491 "method": "env_dpdk_get_mem_stats", 00:06:52.491 "req_id": 1 00:06:52.491 } 00:06:52.491 Got JSON-RPC error response 00:06:52.491 response: 00:06:52.491 { 00:06:52.491 "code": -32601, 00:06:52.491 "message": "Method not found" 00:06:52.491 } 00:06:52.491 07:25:56 -- common/autotest_common.sh@643 
-- # es=1 00:06:52.491 07:25:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.491 07:25:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.491 07:25:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.491 07:25:56 -- app/cmdline.sh@1 -- # killprocess 3964300 00:06:52.491 07:25:56 -- common/autotest_common.sh@926 -- # '[' -z 3964300 ']' 00:06:52.491 07:25:56 -- common/autotest_common.sh@930 -- # kill -0 3964300 00:06:52.491 07:25:56 -- common/autotest_common.sh@931 -- # uname 00:06:52.491 07:25:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.491 07:25:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3964300 00:06:52.491 07:25:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.491 07:25:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.491 07:25:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3964300' 00:06:52.491 killing process with pid 3964300 00:06:52.491 07:25:56 -- common/autotest_common.sh@945 -- # kill 3964300 00:06:52.491 07:25:56 -- common/autotest_common.sh@950 -- # wait 3964300 00:06:52.749 00:06:52.749 real 0m1.706s 00:06:52.749 user 0m2.087s 00:06:52.749 sys 0m0.381s 00:06:52.749 07:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.749 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.749 ************************************ 00:06:52.749 END TEST app_cmdline 00:06:52.749 ************************************ 00:06:52.749 07:25:56 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.749 07:25:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.749 07:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.749 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.749 ************************************ 00:06:52.749 START TEST version 00:06:52.749 
************************************ 00:06:52.749 07:25:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:53.008 * Looking for test storage... 00:06:53.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.008 07:25:56 -- app/version.sh@17 -- # get_header_version major 00:06:53.008 07:25:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.008 07:25:56 -- app/version.sh@14 -- # cut -f2 00:06:53.008 07:25:56 -- app/version.sh@14 -- # tr -d '"' 00:06:53.008 07:25:56 -- app/version.sh@17 -- # major=24 00:06:53.008 07:25:56 -- app/version.sh@18 -- # get_header_version minor 00:06:53.008 07:25:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.008 07:25:56 -- app/version.sh@14 -- # tr -d '"' 00:06:53.008 07:25:56 -- app/version.sh@14 -- # cut -f2 00:06:53.008 07:25:56 -- app/version.sh@18 -- # minor=1 00:06:53.008 07:25:56 -- app/version.sh@19 -- # get_header_version patch 00:06:53.008 07:25:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.008 07:25:56 -- app/version.sh@14 -- # cut -f2 00:06:53.008 07:25:56 -- app/version.sh@14 -- # tr -d '"' 00:06:53.008 07:25:56 -- app/version.sh@19 -- # patch=1 00:06:53.008 07:25:56 -- app/version.sh@20 -- # get_header_version suffix 00:06:53.008 07:25:56 -- app/version.sh@14 -- # cut -f2 00:06:53.008 07:25:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.008 07:25:56 -- app/version.sh@14 -- # tr -d '"' 00:06:53.008 07:25:56 -- app/version.sh@20 -- # suffix=-pre 00:06:53.008 07:25:56 -- 
app/version.sh@22 -- # version=24.1 00:06:53.008 07:25:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.008 07:25:56 -- app/version.sh@25 -- # version=24.1.1 00:06:53.008 07:25:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:53.008 07:25:56 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:53.008 07:25:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.008 07:25:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:53.008 07:25:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:53.008 00:06:53.008 real 0m0.147s 00:06:53.008 user 0m0.076s 00:06:53.008 sys 0m0.101s 00:06:53.008 07:25:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.008 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:53.008 ************************************ 00:06:53.008 END TEST version 00:06:53.008 ************************************ 00:06:53.008 07:25:56 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@204 -- # uname -s 00:06:53.008 07:25:56 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:53.008 07:25:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:53.008 07:25:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:53.008 07:25:56 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:53.008 07:25:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:53.008 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:53.008 07:25:56 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@278 -- 
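The version check traced above (get_header_version major/minor/patch/suffix, then the 24.1 → 24.1.1 → 24.1.1rc0 assembly) can be sketched as a self-contained script. The header contents below are a stand-in written by the script itself (the real file is include/spdk/version.h), and awk replaces the grep | cut -f2 | tr -d '"' pipeline from the trace; the -pre → rc0 mapping is inferred from the logged values.

```shell
#!/usr/bin/env bash
# Sketch of test/app/version.sh: extract version fields from version.h
# and assemble the version string the way the trace above shows.
set -euo pipefail

hdr=$(mktemp)
trap 'rm -f "$hdr"' EXIT
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 24
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 1
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
    # Whitespace-agnostic stand-in for the grep|cut|tr pipeline in the log.
    awk -v key="SPDK_VERSION_$1" '$2 == key { gsub(/"/, "", $3); print $3 }' "$hdr"
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
# Per the logged values, a -pre suffix is reported as an rc0 pre-release tag.
[[ $suffix == -pre ]] && version="${version}rc0"

echo "$version"
```

The test then compares this string against `python3 -c 'import spdk; print(spdk.__version__)'`, as seen at app/version.sh@30-31.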
# '[' 0 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:53.008 07:25:56 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:53.008 07:25:56 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.008 07:25:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:53.008 07:25:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.008 07:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:53.008 ************************************ 00:06:53.008 START TEST nvmf_tcp 00:06:53.008 ************************************ 00:06:53.008 07:25:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.268 * Looking for test storage... 00:06:53.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.268 07:25:57 -- nvmf/common.sh@7 -- # uname -s 00:06:53.268 07:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.268 07:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.268 07:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.268 07:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.268 07:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.268 07:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.268 07:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.268 07:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.268 07:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.268 07:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.268 07:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:53.268 07:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:53.268 07:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.268 07:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.268 07:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.268 07:25:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.268 07:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.268 07:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.268 07:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.268 07:25:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.268 07:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.268 07:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.268 07:25:57 -- paths/export.sh@5 -- # export PATH 00:06:53.268 07:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.268 07:25:57 -- nvmf/common.sh@46 -- # : 0 00:06:53.268 07:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:53.268 07:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:53.268 
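Each time paths/export.sh is sourced it prepends the golangci/protoc/go directories again, which is why the PATH values above grow with repeated components. A small awk filter can collapse such duplicates while keeping first-seen order; the sample PATH here is a shortened stand-in for the one in the log.

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH components, preserving first-seen order.
dedup_path() {
    # Split on ':' (RS), print each component only the first time it is
    # seen, re-join with ':' (ORS), then strip the trailing separator.
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

sample=/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin:/usr/bin:/usr/local/bin
dedup_path "$sample"
echo
```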
07:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:53.268 07:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.268 07:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.268 07:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:53.268 07:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:53.268 07:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:53.268 07:25:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.268 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:53.268 07:25:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.268 07:25:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:53.268 07:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.268 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.268 ************************************ 00:06:53.268 START TEST nvmf_example 00:06:53.268 ************************************ 00:06:53.268 07:25:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.268 * Looking for test storage... 
00:06:53.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.268 07:25:57 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.268 07:25:57 -- nvmf/common.sh@7 -- # uname -s 00:06:53.268 07:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.268 07:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.268 07:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.268 07:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.268 07:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.269 07:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.269 07:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.269 07:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.269 07:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.269 07:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.269 07:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:53.269 07:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:53.269 07:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.269 07:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.269 07:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.269 07:25:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.269 07:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.269 07:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.269 07:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.269 07:25:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.269 07:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.269 07:25:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.269 07:25:57 -- paths/export.sh@5 -- # export PATH 00:06:53.269 07:25:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.269 07:25:57 -- nvmf/common.sh@46 -- # : 0 00:06:53.269 07:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:53.269 07:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:53.269 07:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:53.269 07:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.269 07:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.269 07:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:53.269 07:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:53.269 07:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:53.269 07:25:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:53.269 07:25:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:53.269 07:25:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:53.269 07:25:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:53.269 07:25:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:53.269 07:25:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:53.269 07:25:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:53.269 07:25:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:53.269 07:25:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.269 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:53.269 07:25:57 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:53.269 07:25:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:53.269 07:25:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.269 07:25:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:53.269 07:25:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:53.269 07:25:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:53.269 07:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.269 07:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.269 07:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.269 07:25:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:53.269 07:25:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:53.269 07:25:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:53.269 07:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:58.548 07:26:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:58.548 07:26:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:58.548 07:26:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:58.548 07:26:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:58.548 07:26:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:58.548 07:26:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:58.548 07:26:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:58.548 07:26:02 -- nvmf/common.sh@294 -- # net_devs=() 00:06:58.548 07:26:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:58.548 07:26:02 -- nvmf/common.sh@295 -- # e810=() 00:06:58.548 07:26:02 -- nvmf/common.sh@295 -- # local -ga e810 00:06:58.548 07:26:02 -- nvmf/common.sh@296 -- # x722=() 00:06:58.548 07:26:02 -- nvmf/common.sh@296 -- # local -ga x722 00:06:58.548 07:26:02 -- nvmf/common.sh@297 -- # mlx=() 00:06:58.548 07:26:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:58.548 07:26:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:06:58.548 07:26:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.548 07:26:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:58.548 07:26:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:58.548 07:26:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:58.548 07:26:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.548 07:26:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:58.548 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:58.548 07:26:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:06:58.548 07:26:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:58.548 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:58.548 07:26:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:58.548 07:26:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:58.548 07:26:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.548 07:26:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.548 07:26:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.548 07:26:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.548 07:26:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:58.548 Found net devices under 0000:af:00.0: cvl_0_0 00:06:58.549 07:26:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.549 07:26:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.549 07:26:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.549 07:26:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.549 07:26:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.549 07:26:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:58.549 Found net devices under 0000:af:00.1: cvl_0_1 00:06:58.549 07:26:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.549 07:26:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:58.549 07:26:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:58.549 07:26:02 -- 
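The "Found net devices under 0000:af:00.x" lines come from nvmf/common.sh globbing /sys/bus/pci/devices/$pci/net/ for each matched NIC (common.sh@382-388). A sketch of that mapping, using a throwaway fake sysfs tree so it runs anywhere; the PCI addresses and cvl_0_* interface names are taken from the log.

```shell
#!/usr/bin/env bash
# Sketch: map PCI functions to their kernel net device names via the
# sysfs layout the driver exposes. The tree below is a stand-in for /sys.
set -euo pipefail

sysfs=$(mktemp -d)
trap 'rm -rf "$sysfs"' EXIT
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)       # glob, as in common.sh@382
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip path, keep ifname
    net_devs+=("${pci_net_devs[@]}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```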
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:58.549 07:26:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:58.549 07:26:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:58.549 07:26:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.549 07:26:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.549 07:26:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.549 07:26:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:58.549 07:26:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.549 07:26:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.549 07:26:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:58.549 07:26:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.549 07:26:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.549 07:26:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:58.549 07:26:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:58.549 07:26:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.549 07:26:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.549 07:26:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.549 07:26:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.549 07:26:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:58.549 07:26:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.549 07:26:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.549 07:26:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.549 07:26:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:58.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:58.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:06:58.549 00:06:58.549 --- 10.0.0.2 ping statistics --- 00:06:58.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.549 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:06:58.808 07:26:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:06:58.808 00:06:58.808 --- 10.0.0.1 ping statistics --- 00:06:58.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.808 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:58.808 07:26:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.808 07:26:02 -- nvmf/common.sh@410 -- # return 0 00:06:58.808 07:26:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:58.808 07:26:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.808 07:26:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:58.808 07:26:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:58.808 07:26:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.808 07:26:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:58.808 07:26:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:58.808 07:26:02 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:58.808 07:26:02 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:58.808 07:26:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:58.808 07:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.808 07:26:02 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:58.808 07:26:02 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:58.808 07:26:02 -- target/nvmf_example.sh@34 -- # nvmfpid=3967785 00:06:58.808 07:26:02 -- target/nvmf_example.sh@35 -- # trap 'process_shm 
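The nvmf_tcp_init sequence above (common.sh@228-266) moves one port of the NIC pair into a fresh network namespace as the target at 10.0.0.2 and leaves its sibling in the root namespace as the initiator at 10.0.0.1, then opens TCP/4420 and ping-verifies both directions. Running it for real needs root and the physical interfaces, so this sketch only assembles and prints the command sequence (a dry run); the interface and namespace names are taken from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based TCP loopback topology.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

cmds=(
  "ip -4 addr flush $target_if"
  "ip -4 addr flush $initiator_if"
  "ip netns add $ns"
  "ip link set $target_if netns $ns"
  "ip addr add 10.0.0.1/24 dev $initiator_if"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "ip netns exec $ns ip link set lo up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"
```

After this, the target app runs under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.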
--id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.808 07:26:02 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:58.808 07:26:02 -- target/nvmf_example.sh@36 -- # waitforlisten 3967785 00:06:58.808 07:26:02 -- common/autotest_common.sh@819 -- # '[' -z 3967785 ']' 00:06:58.808 07:26:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.808 07:26:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.808 07:26:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.808 07:26:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.808 07:26:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.745 07:26:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.745 07:26:03 -- common/autotest_common.sh@852 -- # return 0 00:06:59.745 07:26:03 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:59.745 07:26:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:59.745 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.745 07:26:03 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.746 07:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.746 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.746 07:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.746 07:26:03 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:59.746 07:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.746 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.746 07:26:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.746 07:26:03 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:59.746 07:26:03 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:59.746 07:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.746 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.746 07:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.746 07:26:03 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:59.746 07:26:03 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:59.746 07:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.746 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.746 07:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.746 07:26:03 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.746 07:26:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.746 07:26:03 -- common/autotest_common.sh@10 -- # set +x 00:06:59.746 07:26:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.746 07:26:03 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:59.746 07:26:03 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:59.746 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.959 Initializing NVMe Controllers 00:07:11.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:11.959 Initialization complete. 
Launching workers. 00:07:11.959 ======================================================== 00:07:11.959 Latency(us) 00:07:11.959 Device Information : IOPS MiB/s Average min max 00:07:11.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18617.00 72.72 3437.51 696.97 16271.46 00:07:11.959 ======================================================== 00:07:11.959 Total : 18617.00 72.72 3437.51 696.97 16271.46 00:07:11.959 00:07:11.959 07:26:13 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:11.959 07:26:13 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:11.959 07:26:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:11.959 07:26:13 -- nvmf/common.sh@116 -- # sync 00:07:11.959 07:26:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:11.959 07:26:13 -- nvmf/common.sh@119 -- # set +e 00:07:11.959 07:26:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:11.959 07:26:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:11.959 rmmod nvme_tcp 00:07:11.959 rmmod nvme_fabrics 00:07:11.959 rmmod nvme_keyring 00:07:11.959 07:26:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:11.959 07:26:13 -- nvmf/common.sh@123 -- # set -e 00:07:11.959 07:26:13 -- nvmf/common.sh@124 -- # return 0 00:07:11.959 07:26:13 -- nvmf/common.sh@477 -- # '[' -n 3967785 ']' 00:07:11.959 07:26:13 -- nvmf/common.sh@478 -- # killprocess 3967785 00:07:11.959 07:26:13 -- common/autotest_common.sh@926 -- # '[' -z 3967785 ']' 00:07:11.959 07:26:13 -- common/autotest_common.sh@930 -- # kill -0 3967785 00:07:11.959 07:26:13 -- common/autotest_common.sh@931 -- # uname 00:07:11.959 07:26:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:11.959 07:26:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3967785 00:07:11.959 07:26:13 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:11.959 07:26:13 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:11.959 07:26:13 -- 
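The perf table's columns are internally consistent: at the 4 KiB I/O size (-o 4096), MiB/s is IOPS * 4096 / 2^20, and by Little's law the average latency roughly tracks queue depth / IOPS (-q 64). A quick arithmetic check against the 18617.00 IOPS row, no SPDK needed:

```shell
#!/usr/bin/env bash
# Cross-check the reported throughput and average latency.
iops=18617.00
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / 1048576 }')
lat_us=$(awk -v i="$iops" 'BEGIN { printf "%.0f", 64 / i * 1e6 }')
echo "MiB/s=$mibs approx_avg_latency_us=$lat_us"
```

This reproduces the table's 72.72 MiB/s exactly and lands within a microsecond of its 3437.51 us average.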
common/autotest_common.sh@944 -- # echo 'killing process with pid 3967785' 00:07:11.959 killing process with pid 3967785 00:07:11.960 07:26:13 -- common/autotest_common.sh@945 -- # kill 3967785 00:07:11.960 07:26:13 -- common/autotest_common.sh@950 -- # wait 3967785 00:07:11.960 nvmf threads initialize successfully 00:07:11.960 bdev subsystem init successfully 00:07:11.960 created a nvmf target service 00:07:11.960 create targets's poll groups done 00:07:11.960 all subsystems of target started 00:07:11.960 nvmf target is running 00:07:11.960 all subsystems of target stopped 00:07:11.960 destroy targets's poll groups done 00:07:11.960 destroyed the nvmf target service 00:07:11.960 bdev subsystem finish successfully 00:07:11.960 nvmf threads destroy successfully 00:07:11.960 07:26:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:11.960 07:26:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:11.960 07:26:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:11.960 07:26:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.960 07:26:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:11.960 07:26:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.960 07:26:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.960 07:26:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.219 07:26:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:12.219 07:26:16 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:12.219 07:26:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:12.219 07:26:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 00:07:12.219 real 0m19.126s 00:07:12.219 user 0m45.860s 00:07:12.219 sys 0m5.565s 00:07:12.219 07:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.219 07:26:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.220 ************************************ 00:07:12.220 END TEST 
nvmf_example 00:07:12.220 ************************************ 00:07:12.480 07:26:16 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:12.480 07:26:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:12.480 07:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.480 07:26:16 -- common/autotest_common.sh@10 -- # set +x 00:07:12.480 ************************************ 00:07:12.480 START TEST nvmf_filesystem 00:07:12.480 ************************************ 00:07:12.480 07:26:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:12.480 * Looking for test storage... 00:07:12.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.480 07:26:16 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:12.480 07:26:16 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:12.480 07:26:16 -- common/autotest_common.sh@34 -- # set -e 00:07:12.480 07:26:16 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:12.480 07:26:16 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:12.480 07:26:16 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:12.480 07:26:16 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:12.480 07:26:16 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:12.480 07:26:16 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:12.480 07:26:16 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:12.480 07:26:16 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:12.480 
07:26:16 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:12.480 07:26:16 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:12.480 07:26:16 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:12.480 07:26:16 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:12.480 07:26:16 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:12.480 07:26:16 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:12.480 07:26:16 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:12.480 07:26:16 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:12.480 07:26:16 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:12.480 07:26:16 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:12.480 07:26:16 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:12.480 07:26:16 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:12.480 07:26:16 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:12.480 07:26:16 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:12.480 07:26:16 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:12.480 07:26:16 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:12.480 07:26:16 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:12.480 07:26:16 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:12.480 07:26:16 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:12.480 07:26:16 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:12.480 07:26:16 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:12.480 07:26:16 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:12.480 07:26:16 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:12.480 07:26:16 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:12.480 07:26:16 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:07:12.480 07:26:16 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:12.480 07:26:16 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:12.480 07:26:16 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:12.480 07:26:16 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:12.480 07:26:16 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:12.480 07:26:16 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:12.480 07:26:16 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:12.480 07:26:16 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:12.480 07:26:16 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:12.480 07:26:16 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:12.480 07:26:16 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:12.480 07:26:16 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:12.480 07:26:16 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:12.480 07:26:16 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:12.480 07:26:16 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:12.480 07:26:16 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:12.480 07:26:16 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:12.480 07:26:16 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:12.480 07:26:16 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:12.480 07:26:16 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:12.480 07:26:16 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:12.480 07:26:16 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:12.480 07:26:16 -- common/build_config.sh@57 -- # 
CONFIG_IPSEC_MB_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:12.480 07:26:16 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:12.480 07:26:16 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:12.480 07:26:16 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:12.480 07:26:16 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:12.480 07:26:16 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:12.480 07:26:16 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:12.480 07:26:16 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:12.480 07:26:16 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:12.480 07:26:16 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:12.480 07:26:16 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:12.480 07:26:16 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:12.480 07:26:16 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:12.480 07:26:16 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:12.480 07:26:16 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:12.480 07:26:16 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:12.480 07:26:16 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:12.480 07:26:16 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:12.480 07:26:16 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:12.480 07:26:16 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:12.480 07:26:16 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:12.480 07:26:16 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:12.480 07:26:16 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:12.480 07:26:16 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:12.480 07:26:16 -- common/applications.sh@8 -- # readlink 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:12.480 07:26:16 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:12.480 07:26:16 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:12.480 07:26:16 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:12.480 07:26:16 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:12.480 07:26:16 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:12.480 07:26:16 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:12.480 07:26:16 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:12.480 07:26:16 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:12.480 07:26:16 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:12.480 07:26:16 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:12.480 07:26:16 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:12.480 07:26:16 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:12.480 07:26:16 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:12.480 #define SPDK_CONFIG_H 00:07:12.480 #define SPDK_CONFIG_APPS 1 00:07:12.480 #define SPDK_CONFIG_ARCH native 00:07:12.480 #undef SPDK_CONFIG_ASAN 00:07:12.480 #undef SPDK_CONFIG_AVAHI 00:07:12.480 #undef SPDK_CONFIG_CET 00:07:12.480 #define SPDK_CONFIG_COVERAGE 1 00:07:12.480 #define SPDK_CONFIG_CROSS_PREFIX 00:07:12.480 #undef SPDK_CONFIG_CRYPTO 00:07:12.480 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:12.480 #undef SPDK_CONFIG_CUSTOMOCF 00:07:12.480 #undef SPDK_CONFIG_DAOS 00:07:12.480 #define SPDK_CONFIG_DAOS_DIR 00:07:12.480 
#define SPDK_CONFIG_DEBUG 1 00:07:12.480 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:12.480 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:12.480 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:12.480 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:12.480 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:12.480 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:12.480 #define SPDK_CONFIG_EXAMPLES 1 00:07:12.480 #undef SPDK_CONFIG_FC 00:07:12.480 #define SPDK_CONFIG_FC_PATH 00:07:12.480 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:12.480 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:12.480 #undef SPDK_CONFIG_FUSE 00:07:12.480 #undef SPDK_CONFIG_FUZZER 00:07:12.480 #define SPDK_CONFIG_FUZZER_LIB 00:07:12.480 #undef SPDK_CONFIG_GOLANG 00:07:12.480 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:12.480 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:12.480 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:12.480 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:12.480 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:12.480 #define SPDK_CONFIG_IDXD 1 00:07:12.480 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:12.480 #undef SPDK_CONFIG_IPSEC_MB 00:07:12.480 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:12.480 #define SPDK_CONFIG_ISAL 1 00:07:12.480 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:12.480 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:12.480 #define SPDK_CONFIG_LIBDIR 00:07:12.480 #undef SPDK_CONFIG_LTO 00:07:12.480 #define SPDK_CONFIG_MAX_LCORES 00:07:12.480 #define SPDK_CONFIG_NVME_CUSE 1 00:07:12.480 #undef SPDK_CONFIG_OCF 00:07:12.480 #define SPDK_CONFIG_OCF_PATH 00:07:12.480 #define SPDK_CONFIG_OPENSSL_PATH 00:07:12.480 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:12.480 #undef SPDK_CONFIG_PGO_USE 00:07:12.480 #define SPDK_CONFIG_PREFIX /usr/local 00:07:12.480 #undef SPDK_CONFIG_RAID5F 00:07:12.480 #undef SPDK_CONFIG_RBD 00:07:12.480 #define SPDK_CONFIG_RDMA 1 00:07:12.480 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:12.480 #define 
SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:12.480 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:12.480 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:12.480 #define SPDK_CONFIG_SHARED 1 00:07:12.480 #undef SPDK_CONFIG_SMA 00:07:12.480 #define SPDK_CONFIG_TESTS 1 00:07:12.480 #undef SPDK_CONFIG_TSAN 00:07:12.480 #define SPDK_CONFIG_UBLK 1 00:07:12.480 #define SPDK_CONFIG_UBSAN 1 00:07:12.480 #undef SPDK_CONFIG_UNIT_TESTS 00:07:12.480 #undef SPDK_CONFIG_URING 00:07:12.480 #define SPDK_CONFIG_URING_PATH 00:07:12.480 #undef SPDK_CONFIG_URING_ZNS 00:07:12.480 #undef SPDK_CONFIG_USDT 00:07:12.480 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:12.480 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:12.480 #undef SPDK_CONFIG_VFIO_USER 00:07:12.480 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:12.480 #define SPDK_CONFIG_VHOST 1 00:07:12.480 #define SPDK_CONFIG_VIRTIO 1 00:07:12.480 #undef SPDK_CONFIG_VTUNE 00:07:12.480 #define SPDK_CONFIG_VTUNE_DIR 00:07:12.480 #define SPDK_CONFIG_WERROR 1 00:07:12.480 #define SPDK_CONFIG_WPDK_DIR 00:07:12.480 #undef SPDK_CONFIG_XNVME 00:07:12.480 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:12.480 07:26:16 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:12.480 07:26:16 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.480 07:26:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.480 07:26:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.480 07:26:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.480 07:26:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.480 07:26:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.480 07:26:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.480 07:26:16 -- paths/export.sh@5 -- # export PATH 00:07:12.480 07:26:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.480 07:26:16 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:12.480 07:26:16 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:12.480 07:26:16 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:12.480 07:26:16 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:12.480 07:26:16 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:12.480 07:26:16 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:12.480 07:26:16 -- pm/common@16 -- # TEST_TAG=N/A 00:07:12.480 07:26:16 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:12.480 07:26:16 -- common/autotest_common.sh@52 -- # : 1 00:07:12.480 07:26:16 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:12.480 07:26:16 -- common/autotest_common.sh@56 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:12.480 07:26:16 -- common/autotest_common.sh@58 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:12.480 07:26:16 -- common/autotest_common.sh@60 -- # : 1 00:07:12.480 07:26:16 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:12.480 07:26:16 -- common/autotest_common.sh@62 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:12.480 07:26:16 -- common/autotest_common.sh@64 -- # : 00:07:12.480 07:26:16 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:12.480 07:26:16 -- common/autotest_common.sh@66 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:12.480 07:26:16 -- common/autotest_common.sh@68 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:12.480 07:26:16 -- common/autotest_common.sh@70 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:12.480 07:26:16 -- common/autotest_common.sh@72 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:12.480 07:26:16 -- common/autotest_common.sh@74 -- # : 0 00:07:12.480 07:26:16 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:12.481 07:26:16 -- common/autotest_common.sh@76 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:12.481 07:26:16 -- common/autotest_common.sh@78 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:12.481 07:26:16 -- common/autotest_common.sh@80 -- # : 1 00:07:12.481 07:26:16 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:12.481 07:26:16 -- common/autotest_common.sh@82 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:12.481 07:26:16 -- common/autotest_common.sh@84 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:12.481 07:26:16 -- common/autotest_common.sh@86 -- # : 1 00:07:12.481 07:26:16 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:12.481 
07:26:16 -- common/autotest_common.sh@88 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:12.481 07:26:16 -- common/autotest_common.sh@90 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:12.481 07:26:16 -- common/autotest_common.sh@92 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:12.481 07:26:16 -- common/autotest_common.sh@94 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:12.481 07:26:16 -- common/autotest_common.sh@96 -- # : tcp 00:07:12.481 07:26:16 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:12.481 07:26:16 -- common/autotest_common.sh@98 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:12.481 07:26:16 -- common/autotest_common.sh@100 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:12.481 07:26:16 -- common/autotest_common.sh@102 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:12.481 07:26:16 -- common/autotest_common.sh@104 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:12.481 07:26:16 -- common/autotest_common.sh@106 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:12.481 07:26:16 -- common/autotest_common.sh@108 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:12.481 07:26:16 -- common/autotest_common.sh@110 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:12.481 07:26:16 -- common/autotest_common.sh@112 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:12.481 07:26:16 -- common/autotest_common.sh@114 -- # : 0 
00:07:12.481 07:26:16 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:12.481 07:26:16 -- common/autotest_common.sh@116 -- # : 1 00:07:12.481 07:26:16 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:12.481 07:26:16 -- common/autotest_common.sh@118 -- # : 00:07:12.481 07:26:16 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:12.481 07:26:16 -- common/autotest_common.sh@120 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:12.481 07:26:16 -- common/autotest_common.sh@122 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:12.481 07:26:16 -- common/autotest_common.sh@124 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:12.481 07:26:16 -- common/autotest_common.sh@126 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:12.481 07:26:16 -- common/autotest_common.sh@128 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:12.481 07:26:16 -- common/autotest_common.sh@130 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:12.481 07:26:16 -- common/autotest_common.sh@132 -- # : 00:07:12.481 07:26:16 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:12.481 07:26:16 -- common/autotest_common.sh@134 -- # : true 00:07:12.481 07:26:16 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:12.481 07:26:16 -- common/autotest_common.sh@136 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:12.481 07:26:16 -- common/autotest_common.sh@138 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:12.481 07:26:16 -- common/autotest_common.sh@140 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 
00:07:12.481 07:26:16 -- common/autotest_common.sh@142 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:12.481 07:26:16 -- common/autotest_common.sh@144 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:12.481 07:26:16 -- common/autotest_common.sh@146 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:12.481 07:26:16 -- common/autotest_common.sh@148 -- # : e810 00:07:12.481 07:26:16 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:12.481 07:26:16 -- common/autotest_common.sh@150 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:12.481 07:26:16 -- common/autotest_common.sh@152 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:12.481 07:26:16 -- common/autotest_common.sh@154 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:12.481 07:26:16 -- common/autotest_common.sh@156 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:12.481 07:26:16 -- common/autotest_common.sh@158 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:12.481 07:26:16 -- common/autotest_common.sh@160 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:12.481 07:26:16 -- common/autotest_common.sh@163 -- # : 00:07:12.481 07:26:16 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:12.481 07:26:16 -- common/autotest_common.sh@165 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:12.481 07:26:16 -- common/autotest_common.sh@167 -- # : 0 00:07:12.481 07:26:16 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:12.481 07:26:16 -- 
common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:12.481 07:26:16 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:12.481 07:26:16 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:12.481 07:26:16 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:12.481 07:26:16 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:07:12.481  07:26:16 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1
00:07:12.481  07:26:16 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1
00:07:12.481  07:26:16 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:12.481  07:26:16 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:07:12.481  07:26:16 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:12.481  07:26:16 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:07:12.481  07:26:16 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:07:12.481  07:26:16 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file
00:07:12.481  07:26:16 -- common/autotest_common.sh@196 -- # cat
00:07:12.481  07:26:16 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so
00:07:12.481  07:26:16 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:12.481  07:26:16 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:07:12.481  07:26:16 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:12.481  07:26:16 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:07:12.481  07:26:16 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']'
00:07:12.481  07:26:16 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR
00:07:12.481  07:26:16 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:07:12.481  07:26:16 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:07:12.481  07:26:16 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:07:12.481  07:26:16 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:07:12.481  07:26:16 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:12.481  07:26:16 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:12.481  07:26:16 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:12.481  07:26:16 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:12.481  07:26:16 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:12.481  07:26:16 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:07:12.481  07:26:16 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:12.481  07:26:16 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:12.481  07:26:16 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']'
00:07:12.481  07:26:16 -- common/autotest_common.sh@249 -- # export valgrind=
00:07:12.481  07:26:16 -- common/autotest_common.sh@249 -- # valgrind=
00:07:12.481  07:26:16 -- common/autotest_common.sh@255 -- # uname -s
00:07:12.481  07:26:16 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']'
00:07:12.481  07:26:16 -- common/autotest_common.sh@256 -- # HUGEMEM=4096
00:07:12.481  07:26:16 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes
00:07:12.481  07:26:16 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes
00:07:12.481  07:26:16 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@265 -- # MAKE=make
00:07:12.481  07:26:16 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96
00:07:12.481  07:26:16 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096
00:07:12.481  07:26:16 -- common/autotest_common.sh@282 -- # HUGEMEM=4096
00:07:12.481  07:26:16 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:07:12.481  07:26:16 -- common/autotest_common.sh@289 -- # NO_HUGE=()
00:07:12.481  07:26:16 -- common/autotest_common.sh@290 -- # TEST_MODE=
00:07:12.481  07:26:16 -- common/autotest_common.sh@291 -- # for i in "$@"
00:07:12.481  07:26:16 -- common/autotest_common.sh@292 -- # case "$i" in
00:07:12.481  07:26:16 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp
00:07:12.481  07:26:16 -- common/autotest_common.sh@309 -- # [[ -z 3970149 ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@309 -- # kill -0 3970149
00:07:12.481  07:26:16 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648
00:07:12.481  07:26:16 -- common/autotest_common.sh@319 -- # [[ -v testdir ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@321 -- # local requested_size=2147483648
00:07:12.481  07:26:16 -- common/autotest_common.sh@322 -- # local mount target_dir
00:07:12.481  07:26:16 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses
00:07:12.481  07:26:16 -- common/autotest_common.sh@325 -- # local source fs size avail mount use
00:07:12.481  07:26:16 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates
00:07:12.481  07:26:16 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX
00:07:12.481  07:26:16 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.M7P2kR
00:07:12.481  07:26:16 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:07:12.481  07:26:16 -- common/autotest_common.sh@336 -- # [[ -n '' ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@341 -- # [[ -n '' ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.M7P2kR/tests/target /tmp/spdk.M7P2kR
00:07:12.481  07:26:16 -- common/autotest_common.sh@349 -- # requested_size=2214592512
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@318 -- # df -T
00:07:12.481  07:26:16 -- common/autotest_common.sh@318 -- # grep -v Filesystem
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=0
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=4096
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=5284425728
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=84568629248
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=95552409600
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=10983780352
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=47773609984
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47776202752
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=19101106176
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19110481920
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=9375744
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=46700912640
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47776206848
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=1075294208
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # avails["$mount"]=9555226624
00:07:12.481  07:26:16 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9555238912
00:07:12.481  07:26:16 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288
00:07:12.481  07:26:16 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount
00:07:12.481  07:26:16 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n'
00:07:12.481 * Looking for test storage...
00:07:12.481  07:26:16 -- common/autotest_common.sh@359 -- # local target_space new_size
00:07:12.481  07:26:16 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}"
00:07:12.481  07:26:16 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:12.481  07:26:16 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}'
00:07:12.481  07:26:16 -- common/autotest_common.sh@363 -- # mount=/
00:07:12.481  07:26:16 -- common/autotest_common.sh@365 -- # target_space=84568629248
00:07:12.481  07:26:16 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size ))
00:07:12.481  07:26:16 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size ))
00:07:12.481  07:26:16 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@371 -- # [[ / == / ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@372 -- # new_size=13198372864
00:07:12.481  07:26:16 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 ))
00:07:12.481  07:26:16 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:12.481  07:26:16 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:12.481  07:26:16 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:12.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:12.481  07:26:16 -- common/autotest_common.sh@380 -- # return 0
00:07:12.481  07:26:16 -- common/autotest_common.sh@1667 -- # set -o errtrace
00:07:12.481  07:26:16 -- common/autotest_common.sh@1668 -- # shopt -s extdebug
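The set_test_storage trace above boils down to walking `df` output and accepting the first candidate directory whose filesystem can hold the requested size (2 GiB here). A minimal, self-contained sketch of that core check — the directory name, variable names, and the single-candidate simplification are illustrative assumptions, not the harness's actual helper:

```shell
# Hypothetical stand-in for the core of set_test_storage: does the
# filesystem behind $target_dir have at least $requested_size bytes free?
target_dir=/tmp                    # illustrative candidate directory
requested_size=2147483648          # 2 GiB, the same figure as in the trace

# df -P prints POSIX-format records; on the data row, column 4 is the
# available space in 1K blocks.
avail_kb=$(df -P "$target_dir" | awk 'NR==2 {print $4}')
avail_bytes=$((avail_kb * 1024))

if [ "$avail_bytes" -ge "$requested_size" ]; then
    echo "candidate $target_dir accepted ($avail_bytes bytes free)"
else
    echo "candidate $target_dir rejected ($avail_bytes bytes free)"
fi
```

The real helper additionally iterates over several fallback directories and special-cases tmpfs/ramfs mounts, as the trace shows.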
00:07:12.481  07:26:16 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:07:12.481  07:26:16 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:07:12.481  07:26:16 -- common/autotest_common.sh@1672 -- # true
00:07:12.481  07:26:16 -- common/autotest_common.sh@1674 -- # xtrace_fd
00:07:12.481  07:26:16 -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:07:12.481  07:26:16 -- common/autotest_common.sh@27 -- # exec
00:07:12.481  07:26:16 -- common/autotest_common.sh@29 -- # exec
00:07:12.481  07:26:16 -- common/autotest_common.sh@31 -- # xtrace_restore
00:07:12.482  07:26:16 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:07:12.482  07:26:16 -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:07:12.482  07:26:16 -- common/autotest_common.sh@18 -- # set -x
00:07:12.482  07:26:16 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:12.482  07:26:16 -- nvmf/common.sh@7 -- # uname -s
00:07:12.482  07:26:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:12.482  07:26:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:12.482  07:26:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:12.482  07:26:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:12.482  07:26:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:12.482  07:26:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:12.482  07:26:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:12.482  07:26:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:12.482  07:26:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:12.482  07:26:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:12.482  07:26:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:07:12.482  07:26:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:07:12.482  07:26:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:12.482  07:26:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:12.482  07:26:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:12.482  07:26:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:12.482  07:26:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:12.741  07:26:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:12.741  07:26:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:12.741  07:26:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.741  07:26:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
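The NVME_HOSTNQN used in the trace above is the UUID-based NQN form that `nvme gen-hostnqn` emits, and NVME_HOSTID is just its UUID portion. A rough sketch of that relationship, using the kernel's random-UUID source as a stand-in (an assumption for illustration; nvme-cli prefers a persistent host identity when one is configured):

```shell
# Mimic the shape of `nvme gen-hostnqn` output:
#   nqn.2014-08.org.nvmexpress:uuid:<uuid>
# Falls back to a zero UUID if the kernel interface is unavailable.
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || echo 00000000-0000-0000-0000-000000000000)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
hostid=${hostnqn##*:}   # the host ID is the UUID part after the last colon
echo "$hostnqn"
```

This mirrors how the harness derives NVME_HOSTID from NVME_HOSTNQN before passing both to `nvme connect`.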
00:07:12.741  07:26:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.741  07:26:16 -- paths/export.sh@5 -- # export PATH
00:07:12.741  07:26:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.741  07:26:16 -- nvmf/common.sh@46 -- # : 0
00:07:12.741  07:26:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:12.741  07:26:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:12.741  07:26:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:12.741  07:26:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:12.741  07:26:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:12.741  07:26:16 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:12.741  07:26:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:12.741  07:26:16 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:12.741  07:26:16 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:07:12.741  07:26:16 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:07:12.741  07:26:16 -- target/filesystem.sh@15 -- # nvmftestinit
00:07:12.741  07:26:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:12.741  07:26:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:12.741  07:26:16 -- nvmf/common.sh@436 -- # prepare_net_devs
00:07:12.741  07:26:16 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:12.741  07:26:16 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:12.741  07:26:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:12.741  07:26:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:12.741  07:26:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:12.741  07:26:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:07:12.741  07:26:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:07:12.741  07:26:16 -- nvmf/common.sh@284 -- # xtrace_disable
00:07:12.741  07:26:16 -- common/autotest_common.sh@10 -- # set +x
00:07:18.034  07:26:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:07:18.034  07:26:21 -- nvmf/common.sh@290 -- # pci_devs=()
00:07:18.034  07:26:21 -- nvmf/common.sh@290 -- # local -a pci_devs
00:07:18.034  07:26:21 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:07:18.034  07:26:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:07:18.034  07:26:21 -- nvmf/common.sh@292 -- # pci_drivers=()
00:07:18.034  07:26:21 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:07:18.034  07:26:21 -- nvmf/common.sh@294 -- # net_devs=()
00:07:18.034  07:26:21 -- nvmf/common.sh@294 -- # local -ga net_devs
00:07:18.034  07:26:21 -- nvmf/common.sh@295 -- # e810=()
00:07:18.034  07:26:21 -- nvmf/common.sh@295 -- # local -ga e810
00:07:18.034  07:26:21 -- nvmf/common.sh@296 -- # x722=()
00:07:18.034  07:26:21 -- nvmf/common.sh@296 -- # local -ga x722
00:07:18.034  07:26:21 -- nvmf/common.sh@297 -- # mlx=()
00:07:18.034  07:26:21 -- nvmf/common.sh@297 -- # local -ga mlx
00:07:18.034  07:26:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:18.034  07:26:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:07:18.034  07:26:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:07:18.034 Found 0000:af:00.0 (0x8086 - 0x159b)
00:07:18.034  07:26:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:07:18.034  07:26:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:07:18.034 Found 0000:af:00.1 (0x8086 - 0x159b)
00:07:18.034  07:26:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:07:18.034  07:26:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:18.034  07:26:21 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:18.034  07:26:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:07:18.034 Found net devices under 0000:af:00.0: cvl_0_0
00:07:18.034  07:26:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:07:18.034  07:26:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:18.034  07:26:21 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:18.034  07:26:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:07:18.034 Found net devices under 0000:af:00.1: cvl_0_1
00:07:18.034  07:26:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@402 -- # is_hw=yes
00:07:18.034  07:26:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:07:18.034  07:26:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:18.034  07:26:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:18.034  07:26:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:07:18.034  07:26:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:18.034  07:26:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:18.034  07:26:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:07:18.034  07:26:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:18.034  07:26:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:18.034  07:26:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:07:18.034  07:26:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:07:18.034  07:26:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:07:18.034  07:26:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:18.034  07:26:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:18.034  07:26:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:18.034  07:26:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:07:18.034  07:26:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:18.034  07:26:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:18.034  07:26:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:18.034  07:26:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:07:18.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:18.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms
00:07:18.034
00:07:18.034 --- 10.0.0.2 ping statistics ---
00:07:18.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:18.034 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms
00:07:18.034  07:26:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:18.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:18.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:07:18.034
00:07:18.034 --- 10.0.0.1 ping statistics ---
00:07:18.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:18.034 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:07:18.034  07:26:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:18.034  07:26:21 -- nvmf/common.sh@410 -- # return 0
00:07:18.034  07:26:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:07:18.034  07:26:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:18.034  07:26:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:07:18.034  07:26:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:18.034  07:26:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:07:18.034  07:26:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:07:18.035  07:26:21 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:18.035  07:26:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:07:18.035  07:26:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:18.035  07:26:21 -- common/autotest_common.sh@10 -- # set +x
00:07:18.035 ************************************
00:07:18.035 START TEST nvmf_filesystem_no_in_capsule
00:07:18.035 ************************************
00:07:18.035  07:26:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0
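The nvmf_tcp_init sequence traced above splits one host into a target side (10.0.0.2, inside the cvl_0_0_ns_spdk namespace) and an initiator side (10.0.0.1, root namespace), then verifies reachability with ping in both directions. A sketch of the same topology using a veth pair instead of a physical port — the veth substitution, the names, and the 10.0.0.0/24 addressing are illustrative assumptions; it needs root and cleans up after itself, and it only reports what it would do otherwise:

```shell
# Build a two-namespace loopback topology like the one in the trace,
# but with a veth pair so no NIC is required. Root-only; skips otherwise.
if [ "$(id -u)" -ne 0 ]; then
    demo_status=skipped
    echo "netns demo skipped: requires root"
else
    ip netns add demo_tgt_ns 2>/dev/null
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns demo_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec demo_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec demo_tgt_ns ip link set veth_tgt up
    ip netns exec demo_tgt_ns ip link set lo up
    # Same reachability check the harness performs before starting nvmf_tgt.
    if ping -c 1 -W 2 10.0.0.2 >/dev/null 2>&1; then
        demo_status=ok
    else
        demo_status=skipped
    fi
    ip netns del demo_tgt_ns 2>/dev/null
    ip link del veth_init 2>/dev/null
fi
echo "demo_status=$demo_status"
```

The trace instead moves the real port cvl_0_0 into the namespace (`ip link set cvl_0_0 netns cvl_0_0_ns_spdk`), which is why the later `nvmf_tgt` invocation is wrapped in `ip netns exec cvl_0_0_ns_spdk`.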
00:07:18.035  07:26:21 -- target/filesystem.sh@47 -- # in_capsule=0
00:07:18.035  07:26:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:18.035  07:26:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:07:18.035  07:26:21 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:18.035  07:26:21 -- common/autotest_common.sh@10 -- # set +x
00:07:18.035  07:26:21 -- nvmf/common.sh@469 -- # nvmfpid=3973237
00:07:18.035  07:26:21 -- nvmf/common.sh@470 -- # waitforlisten 3973237
00:07:18.035  07:26:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:18.035  07:26:21 -- common/autotest_common.sh@819 -- # '[' -z 3973237 ']'
00:07:18.035  07:26:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.035  07:26:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:07:18.035  07:26:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.035  07:26:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:07:18.035  07:26:21 -- common/autotest_common.sh@10 -- # set +x
00:07:18.035 [2024-10-07 07:26:21.980339] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:07:18.035 [2024-10-07 07:26:21.980388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.294 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.294 [2024-10-07 07:26:22.030320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:18.294 [2024-10-07 07:26:22.100248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:18.294 [2024-10-07 07:26:22.100356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:18.294 [2024-10-07 07:26:22.100364] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:18.294 [2024-10-07 07:26:22.100370] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:18.294 [2024-10-07 07:26:22.100462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.294 [2024-10-07 07:26:22.100590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.294 [2024-10-07 07:26:22.100655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:18.294 [2024-10-07 07:26:22.100656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.862  07:26:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:07:18.862  07:26:22 -- common/autotest_common.sh@852 -- # return 0
00:07:19.122  07:26:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:07:19.122  07:26:22 -- common/autotest_common.sh@718 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122  07:26:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:19.122  07:26:22 -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:07:19.122  07:26:22 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:07:19.122  07:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122 [2024-10-07 07:26:22.840470] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.122  07:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:22 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:07:19.122  07:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122 Malloc1
00:07:19.122  07:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:19.122  07:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122  07:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:07:19.122  07:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122  07:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:22 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:19.122  07:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:22 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122 [2024-10-07 07:26:22.993372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:19.122  07:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:07:19.122  07:26:22 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1
00:07:19.122  07:26:22 -- common/autotest_common.sh@1358 -- # local bdev_info
00:07:19.122  07:26:22 -- common/autotest_common.sh@1359 -- # local bs
00:07:19.122  07:26:22 -- common/autotest_common.sh@1360 -- # local nb
00:07:19.122  07:26:23 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:07:19.122  07:26:23 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:19.122  07:26:23 -- common/autotest_common.sh@10 -- # set +x
00:07:19.122  07:26:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:19.122  07:26:23 -- common/autotest_common.sh@1361 -- # bdev_info='[
00:07:19.122 {
00:07:19.122 "name": "Malloc1",
00:07:19.122 "aliases": [
00:07:19.122 "9e0088ce-2b11-4301-9105-6d4ab19c5e1c"
00:07:19.122 ],
00:07:19.122 "product_name": "Malloc disk",
00:07:19.122 "block_size": 512,
00:07:19.122 "num_blocks": 1048576,
00:07:19.122 "uuid": "9e0088ce-2b11-4301-9105-6d4ab19c5e1c",
00:07:19.122 "assigned_rate_limits": {
00:07:19.122 "rw_ios_per_sec": 0,
00:07:19.122 "rw_mbytes_per_sec": 0,
00:07:19.122 "r_mbytes_per_sec": 0,
00:07:19.122 "w_mbytes_per_sec": 0
00:07:19.122 },
00:07:19.122 "claimed": true,
00:07:19.122 "claim_type": "exclusive_write",
00:07:19.122 "zoned": false,
00:07:19.122 "supported_io_types": {
00:07:19.122 "read": true,
00:07:19.122 "write": true,
00:07:19.122 "unmap": true,
00:07:19.122 "write_zeroes": true,
00:07:19.122 "flush": true,
00:07:19.122 "reset": true,
00:07:19.122 "compare": false,
00:07:19.122 "compare_and_write": false,
00:07:19.122 "abort": true,
00:07:19.122 "nvme_admin": false,
00:07:19.122 "nvme_io": false
00:07:19.122 },
00:07:19.122 "memory_domains": [
00:07:19.122 {
00:07:19.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:19.122 "dma_device_type": 2
00:07:19.122 }
00:07:19.122 ],
00:07:19.122 "driver_specific": {}
00:07:19.122 }
00:07:19.122 ]'
00:07:19.122  07:26:23 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size'
00:07:19.122  07:26:23 -- common/autotest_common.sh@1362 -- # bs=512
00:07:19.122  07:26:23 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks'
00:07:19.381  07:26:23 -- common/autotest_common.sh@1363 -- # nb=1048576
00:07:19.381  07:26:23 -- common/autotest_common.sh@1366 -- # bdev_size=512
00:07:19.381  07:26:23 -- common/autotest_common.sh@1367 -- # echo 512
00:07:19.381  07:26:23 -- target/filesystem.sh@58 -- # malloc_size=536870912
00:07:19.381  07:26:23 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:07:20.759  07:26:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:07:20.759  07:26:24 -- common/autotest_common.sh@1177 -- # local i=0
00:07:20.759  07:26:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:07:20.759  07:26:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:07:20.759  07:26:24 -- common/autotest_common.sh@1184 -- # sleep 2
00:07:22.668  07:26:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:07:22.668  07:26:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:07:22.668  07:26:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:07:22.668  07:26:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:07:22.668  07:26:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:07:22.668  07:26:26 -- common/autotest_common.sh@1187 -- # return 0
00:07:22.668  07:26:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:07:22.668  07:26:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:07:22.668  07:26:26 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:07:22.668  07:26:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:07:22.668  07:26:26 -- setup/common.sh@76 -- # local dev=nvme0n1
00:07:22.668  07:26:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:07:22.668  07:26:26 -- setup/common.sh@80 -- # echo 536870912
00:07:22.668  07:26:26 -- target/filesystem.sh@64 -- # nvme_size=536870912
00:07:22.668  07:26:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:07:22.668  07:26:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:07:22.668  07:26:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:07:22.927  07:26:26 -- target/filesystem.sh@69 -- # partprobe
00:07:22.927  07:26:26 -- target/filesystem.sh@70 -- # sleep 1
00:07:23.866  07:26:27 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:07:23.866  07:26:27 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:07:23.866  07:26:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:07:23.866  07:26:27 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:23.866  07:26:27 -- common/autotest_common.sh@10 -- # set +x
00:07:23.866 ************************************
00:07:23.866 START TEST filesystem_ext4
00:07:23.866 ************************************
00:07:23.866  07:26:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1
00:07:23.866  07:26:27 -- target/filesystem.sh@18 -- # fstype=ext4
00:07:23.866  07:26:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:23.866  07:26:27 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:07:23.866  07:26:27 -- common/autotest_common.sh@902 -- # local fstype=ext4
00:07:23.866  07:26:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1
00:07:23.866  07:26:27 -- common/autotest_common.sh@904 -- # local i=0
00:07:23.866  07:26:27 -- common/autotest_common.sh@905 -- # local force
00:07:23.866  07:26:27 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']'
00:07:23.866  07:26:27 -- common/autotest_common.sh@908 -- # force=-F
00:07:23.866  07:26:27 --
common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.866 mke2fs 1.47.0 (5-Feb-2023) 00:07:24.125 Discarding device blocks: 0/522240 done 00:07:24.125 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.125 Filesystem UUID: 67c72e75-77b6-4007-84a2-4cff6d4577ad 00:07:24.125 Superblock backups stored on blocks: 00:07:24.125 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.125 00:07:24.125 Allocating group tables: 0/64 done 00:07:24.125 Writing inode tables: 0/64 done 00:07:24.384 Creating journal (8192 blocks): done 00:07:26.589 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:26.589 00:07:26.589 07:26:30 -- common/autotest_common.sh@921 -- # return 0 00:07:26.589 07:26:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.161 07:26:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.161 07:26:36 -- target/filesystem.sh@25 -- # sync 00:07:33.161 07:26:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.161 07:26:36 -- target/filesystem.sh@27 -- # sync 00:07:33.161 07:26:36 -- target/filesystem.sh@29 -- # i=0 00:07:33.161 07:26:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.161 07:26:36 -- target/filesystem.sh@37 -- # kill -0 3973237 00:07:33.161 07:26:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.161 07:26:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.161 07:26:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.161 07:26:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.161 00:07:33.161 real 0m8.709s 00:07:33.161 user 0m0.029s 00:07:33.161 sys 0m0.073s 00:07:33.161 07:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.161 07:26:36 -- common/autotest_common.sh@10 -- # set +x 00:07:33.161 ************************************ 00:07:33.161 END TEST filesystem_ext4 00:07:33.161 ************************************ 00:07:33.161 07:26:36 -- 
target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:33.161 07:26:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:33.161 07:26:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.161 07:26:36 -- common/autotest_common.sh@10 -- # set +x 00:07:33.161 ************************************ 00:07:33.162 START TEST filesystem_btrfs 00:07:33.162 ************************************ 00:07:33.162 07:26:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:33.162 07:26:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:33.162 07:26:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.162 07:26:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:33.162 07:26:36 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:33.162 07:26:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:33.162 07:26:36 -- common/autotest_common.sh@904 -- # local i=0 00:07:33.162 07:26:36 -- common/autotest_common.sh@905 -- # local force 00:07:33.162 07:26:36 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:33.162 07:26:36 -- common/autotest_common.sh@910 -- # force=-f 00:07:33.162 07:26:36 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:33.162 btrfs-progs v6.8.1 00:07:33.162 See https://btrfs.readthedocs.io for more information. 00:07:33.162 00:07:33.162 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:33.162 NOTE: several default settings have changed in version 5.15, please make sure 00:07:33.162 this does not affect your deployments: 00:07:33.162 - DUP for metadata (-m dup) 00:07:33.162 - enabled no-holes (-O no-holes) 00:07:33.162 - enabled free-space-tree (-R free-space-tree) 00:07:33.162 00:07:33.162 Label: (null) 00:07:33.162 UUID: e2464029-679f-4602-9585-5f18a66b1c41 00:07:33.162 Node size: 16384 00:07:33.162 Sector size: 4096 (CPU page size: 4096) 00:07:33.162 Filesystem size: 510.00MiB 00:07:33.162 Block group profiles: 00:07:33.162 Data: single 8.00MiB 00:07:33.162 Metadata: DUP 32.00MiB 00:07:33.162 System: DUP 8.00MiB 00:07:33.162 SSD detected: yes 00:07:33.162 Zoned device: no 00:07:33.162 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:33.162 Checksum: crc32c 00:07:33.162 Number of devices: 1 00:07:33.162 Devices: 00:07:33.162 ID SIZE PATH 00:07:33.162 1 510.00MiB /dev/nvme0n1p1 00:07:33.162 00:07:33.162 07:26:36 -- common/autotest_common.sh@921 -- # return 0 00:07:33.162 07:26:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.421 07:26:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.421 07:26:37 -- target/filesystem.sh@25 -- # sync 00:07:33.421 07:26:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.421 07:26:37 -- target/filesystem.sh@27 -- # sync 00:07:33.421 07:26:37 -- target/filesystem.sh@29 -- # i=0 00:07:33.421 07:26:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.421 07:26:37 -- target/filesystem.sh@37 -- # kill -0 3973237 00:07:33.421 07:26:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.422 07:26:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.422 07:26:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.422 07:26:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.422 00:07:33.422 real 0m0.725s 00:07:33.422 user 0m0.030s 00:07:33.422 sys 0m0.109s 00:07:33.422 07:26:37 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.422 07:26:37 -- common/autotest_common.sh@10 -- # set +x 00:07:33.422 ************************************ 00:07:33.422 END TEST filesystem_btrfs 00:07:33.422 ************************************ 00:07:33.422 07:26:37 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:33.422 07:26:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:33.422 07:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.422 07:26:37 -- common/autotest_common.sh@10 -- # set +x 00:07:33.422 ************************************ 00:07:33.422 START TEST filesystem_xfs 00:07:33.422 ************************************ 00:07:33.422 07:26:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:33.422 07:26:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:33.422 07:26:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.422 07:26:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:33.422 07:26:37 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:33.422 07:26:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:33.422 07:26:37 -- common/autotest_common.sh@904 -- # local i=0 00:07:33.422 07:26:37 -- common/autotest_common.sh@905 -- # local force 00:07:33.422 07:26:37 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:33.422 07:26:37 -- common/autotest_common.sh@910 -- # force=-f 00:07:33.422 07:26:37 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:33.681 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:33.681 = sectsz=512 attr=2, projid32bit=1 00:07:33.681 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:33.681 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:33.681 data = bsize=4096 blocks=130560, imaxpct=25 00:07:33.681 = sunit=0 swidth=0 blks 00:07:33.681 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:33.681 log 
=internal log bsize=4096 blocks=16384, version=2 00:07:33.681 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:33.681 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:34.248 Discarding blocks...Done. 00:07:34.248 07:26:38 -- common/autotest_common.sh@921 -- # return 0 00:07:34.248 07:26:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.157 07:26:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.157 07:26:40 -- target/filesystem.sh@25 -- # sync 00:07:36.157 07:26:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.157 07:26:40 -- target/filesystem.sh@27 -- # sync 00:07:36.157 07:26:40 -- target/filesystem.sh@29 -- # i=0 00:07:36.157 07:26:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.157 07:26:40 -- target/filesystem.sh@37 -- # kill -0 3973237 00:07:36.157 07:26:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.157 07:26:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.157 07:26:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.157 07:26:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.157 00:07:36.157 real 0m2.723s 00:07:36.157 user 0m0.021s 00:07:36.157 sys 0m0.076s 00:07:36.157 07:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.157 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.157 ************************************ 00:07:36.157 END TEST filesystem_xfs 00:07:36.157 ************************************ 00:07:36.157 07:26:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:36.157 07:26:40 -- target/filesystem.sh@93 -- # sync 00:07:36.157 07:26:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.444 07:26:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.444 07:26:40 -- common/autotest_common.sh@1198 -- # local i=0 
00:07:36.444 07:26:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:36.444 07:26:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.444 07:26:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:36.444 07:26:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.445 07:26:40 -- common/autotest_common.sh@1210 -- # return 0 00:07:36.445 07:26:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.445 07:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.445 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.445 07:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.445 07:26:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:36.445 07:26:40 -- target/filesystem.sh@101 -- # killprocess 3973237 00:07:36.445 07:26:40 -- common/autotest_common.sh@926 -- # '[' -z 3973237 ']' 00:07:36.445 07:26:40 -- common/autotest_common.sh@930 -- # kill -0 3973237 00:07:36.445 07:26:40 -- common/autotest_common.sh@931 -- # uname 00:07:36.445 07:26:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.445 07:26:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3973237 00:07:36.445 07:26:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.445 07:26:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.445 07:26:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3973237' 00:07:36.445 killing process with pid 3973237 00:07:36.445 07:26:40 -- common/autotest_common.sh@945 -- # kill 3973237 00:07:36.445 07:26:40 -- common/autotest_common.sh@950 -- # wait 3973237 00:07:36.735 07:26:40 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:36.735 00:07:36.735 real 0m18.740s 00:07:36.736 user 1m13.772s 00:07:36.736 sys 0m1.427s 00:07:36.736 07:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:07:36.736 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.736 ************************************ 00:07:36.736 END TEST nvmf_filesystem_no_in_capsule 00:07:36.736 ************************************ 00:07:36.995 07:26:40 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:36.995 07:26:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:36.995 07:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.995 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.995 ************************************ 00:07:36.995 START TEST nvmf_filesystem_in_capsule 00:07:36.996 ************************************ 00:07:36.996 07:26:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:36.996 07:26:40 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:36.996 07:26:40 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.996 07:26:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:36.996 07:26:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:36.996 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.996 07:26:40 -- nvmf/common.sh@469 -- # nvmfpid=3976451 00:07:36.996 07:26:40 -- nvmf/common.sh@470 -- # waitforlisten 3976451 00:07:36.996 07:26:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.996 07:26:40 -- common/autotest_common.sh@819 -- # '[' -z 3976451 ']' 00:07:36.996 07:26:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.996 07:26:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.996 07:26:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.996 07:26:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.996 07:26:40 -- common/autotest_common.sh@10 -- # set +x 00:07:36.996 [2024-10-07 07:26:40.768018] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:36.996 [2024-10-07 07:26:40.768067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.996 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.996 [2024-10-07 07:26:40.825701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.996 [2024-10-07 07:26:40.902379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.996 [2024-10-07 07:26:40.902488] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.996 [2024-10-07 07:26:40.902496] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.996 [2024-10-07 07:26:40.902503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:36.996 [2024-10-07 07:26:40.902545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.996 [2024-10-07 07:26:40.902653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.996 [2024-10-07 07:26:40.902739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.996 [2024-10-07 07:26:40.902740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.934 07:26:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.934 07:26:41 -- common/autotest_common.sh@852 -- # return 0 00:07:37.934 07:26:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:37.934 07:26:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 07:26:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.934 07:26:41 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.934 07:26:41 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 [2024-10-07 07:26:41.634425] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 [2024-10-07 07:26:41.777400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:37.934 07:26:41 -- common/autotest_common.sh@1359 -- # local bs 00:07:37.934 07:26:41 -- common/autotest_common.sh@1360 -- # local nb 00:07:37.934 07:26:41 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.934 07:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.934 07:26:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 07:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.934 07:26:41 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:37.934 { 00:07:37.934 "name": "Malloc1", 00:07:37.934 "aliases": [ 00:07:37.934 "86b0fa4a-30e4-4e9f-9da6-34b3ef70f671" 00:07:37.934 ], 00:07:37.934 "product_name": "Malloc disk", 00:07:37.934 "block_size": 512, 00:07:37.934 "num_blocks": 1048576, 00:07:37.934 "uuid": 
"86b0fa4a-30e4-4e9f-9da6-34b3ef70f671", 00:07:37.934 "assigned_rate_limits": { 00:07:37.934 "rw_ios_per_sec": 0, 00:07:37.934 "rw_mbytes_per_sec": 0, 00:07:37.934 "r_mbytes_per_sec": 0, 00:07:37.934 "w_mbytes_per_sec": 0 00:07:37.934 }, 00:07:37.934 "claimed": true, 00:07:37.934 "claim_type": "exclusive_write", 00:07:37.934 "zoned": false, 00:07:37.934 "supported_io_types": { 00:07:37.934 "read": true, 00:07:37.934 "write": true, 00:07:37.934 "unmap": true, 00:07:37.934 "write_zeroes": true, 00:07:37.934 "flush": true, 00:07:37.934 "reset": true, 00:07:37.934 "compare": false, 00:07:37.934 "compare_and_write": false, 00:07:37.934 "abort": true, 00:07:37.934 "nvme_admin": false, 00:07:37.934 "nvme_io": false 00:07:37.934 }, 00:07:37.934 "memory_domains": [ 00:07:37.934 { 00:07:37.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.935 "dma_device_type": 2 00:07:37.935 } 00:07:37.935 ], 00:07:37.935 "driver_specific": {} 00:07:37.935 } 00:07:37.935 ]' 00:07:37.935 07:26:41 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:37.935 07:26:41 -- common/autotest_common.sh@1362 -- # bs=512 00:07:37.935 07:26:41 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:37.935 07:26:41 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:37.935 07:26:41 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:37.935 07:26:41 -- common/autotest_common.sh@1367 -- # echo 512 00:07:37.935 07:26:41 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.935 07:26:41 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.315 07:26:43 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.315 07:26:43 -- common/autotest_common.sh@1177 -- # local i=0 00:07:39.315 07:26:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:39.315 07:26:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:39.315 07:26:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:41.222 07:26:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:41.222 07:26:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:41.222 07:26:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.222 07:26:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:41.222 07:26:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.222 07:26:45 -- common/autotest_common.sh@1187 -- # return 0 00:07:41.222 07:26:45 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:41.222 07:26:45 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:41.222 07:26:45 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:41.222 07:26:45 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:41.222 07:26:45 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:41.222 07:26:45 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:41.222 07:26:45 -- setup/common.sh@80 -- # echo 536870912 00:07:41.222 07:26:45 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:41.222 07:26:45 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:41.222 07:26:45 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:41.222 07:26:45 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:41.791 07:26:45 -- target/filesystem.sh@69 -- # partprobe 00:07:42.050 07:26:45 -- target/filesystem.sh@70 -- # sleep 1 00:07:43.428 07:26:46 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:43.428 07:26:46 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:43.428 07:26:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:43.428 07:26:46 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.428 07:26:46 -- common/autotest_common.sh@10 -- # set +x 00:07:43.428 ************************************ 00:07:43.428 START TEST filesystem_in_capsule_ext4 00:07:43.428 ************************************ 00:07:43.428 07:26:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:43.428 07:26:46 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:43.428 07:26:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.428 07:26:46 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:43.428 07:26:46 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:43.428 07:26:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:43.428 07:26:46 -- common/autotest_common.sh@904 -- # local i=0 00:07:43.428 07:26:46 -- common/autotest_common.sh@905 -- # local force 00:07:43.428 07:26:46 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:43.428 07:26:46 -- common/autotest_common.sh@908 -- # force=-F 00:07:43.428 07:26:46 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:43.428 mke2fs 1.47.0 (5-Feb-2023) 00:07:43.428 Discarding device blocks: 0/522240 done 00:07:43.428 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:43.428 Filesystem UUID: a1b586a5-26e5-4809-8f8f-d1da02f78088 00:07:43.428 Superblock backups stored on blocks: 00:07:43.428 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:43.428 00:07:43.429 Allocating group tables: 0/64 done 00:07:43.429 Writing inode tables: 0/64 done 00:07:45.962 Creating journal (8192 blocks): done 00:07:48.276 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:07:48.276 00:07:48.276 07:26:52 -- common/autotest_common.sh@921 -- # return 0 00:07:48.277 07:26:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.849 07:26:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.849 
07:26:57 -- target/filesystem.sh@25 -- # sync 00:07:54.849 07:26:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.849 07:26:57 -- target/filesystem.sh@27 -- # sync 00:07:54.849 07:26:57 -- target/filesystem.sh@29 -- # i=0 00:07:54.849 07:26:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.849 07:26:57 -- target/filesystem.sh@37 -- # kill -0 3976451 00:07:54.849 07:26:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.849 07:26:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.849 07:26:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.849 07:26:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.849 00:07:54.849 real 0m10.680s 00:07:54.849 user 0m0.035s 00:07:54.849 sys 0m0.069s 00:07:54.849 07:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.849 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:54.849 ************************************ 00:07:54.849 END TEST filesystem_in_capsule_ext4 00:07:54.849 ************************************ 00:07:54.849 07:26:57 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.849 07:26:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.849 07:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.849 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:54.849 ************************************ 00:07:54.849 START TEST filesystem_in_capsule_btrfs 00:07:54.849 ************************************ 00:07:54.849 07:26:57 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:54.849 07:26:57 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.849 07:26:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.849 07:26:57 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.849 07:26:57 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:54.849 07:26:57 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.849 07:26:57 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.849 07:26:57 -- common/autotest_common.sh@905 -- # local force 00:07:54.849 07:26:57 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:54.849 07:26:57 -- common/autotest_common.sh@910 -- # force=-f 00:07:54.849 07:26:57 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.849 btrfs-progs v6.8.1 00:07:54.849 See https://btrfs.readthedocs.io for more information. 00:07:54.849 00:07:54.849 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:54.849 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.849 this does not affect your deployments: 00:07:54.849 - DUP for metadata (-m dup) 00:07:54.849 - enabled no-holes (-O no-holes) 00:07:54.849 - enabled free-space-tree (-R free-space-tree) 00:07:54.849 00:07:54.849 Label: (null) 00:07:54.849 UUID: 7ef8def3-e381-4287-9ff3-6c0002e825af 00:07:54.849 Node size: 16384 00:07:54.849 Sector size: 4096 (CPU page size: 4096) 00:07:54.849 Filesystem size: 510.00MiB 00:07:54.849 Block group profiles: 00:07:54.849 Data: single 8.00MiB 00:07:54.849 Metadata: DUP 32.00MiB 00:07:54.849 System: DUP 8.00MiB 00:07:54.849 SSD detected: yes 00:07:54.849 Zoned device: no 00:07:54.849 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.849 Checksum: crc32c 00:07:54.849 Number of devices: 1 00:07:54.849 Devices: 00:07:54.849 ID SIZE PATH 00:07:54.850 1 510.00MiB /dev/nvme0n1p1 00:07:54.850 00:07:54.850 07:26:58 -- common/autotest_common.sh@921 -- # return 0 00:07:54.850 07:26:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.850 07:26:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.850 07:26:58 -- target/filesystem.sh@25 -- # sync 00:07:54.850 07:26:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.850 07:26:58 -- target/filesystem.sh@27 -- # 
sync 00:07:54.850 07:26:58 -- target/filesystem.sh@29 -- # i=0 00:07:54.850 07:26:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.850 07:26:58 -- target/filesystem.sh@37 -- # kill -0 3976451 00:07:54.850 07:26:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.850 07:26:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.850 07:26:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.850 07:26:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.850 00:07:54.850 real 0m0.804s 00:07:54.850 user 0m0.033s 00:07:54.850 sys 0m0.106s 00:07:54.850 07:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.850 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.850 ************************************ 00:07:54.850 END TEST filesystem_in_capsule_btrfs 00:07:54.850 ************************************ 00:07:54.850 07:26:58 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:54.850 07:26:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.850 07:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.850 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:54.850 ************************************ 00:07:54.850 START TEST filesystem_in_capsule_xfs 00:07:54.850 ************************************ 00:07:54.850 07:26:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.850 07:26:58 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.850 07:26:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.850 07:26:58 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.850 07:26:58 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:54.850 07:26:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.850 07:26:58 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.850 07:26:58 -- common/autotest_common.sh@905 -- # local 
force 00:07:54.850 07:26:58 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:54.850 07:26:58 -- common/autotest_common.sh@910 -- # force=-f 00:07:54.850 07:26:58 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.850 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.850 = sectsz=512 attr=2, projid32bit=1 00:07:54.850 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.850 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.850 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.850 = sunit=0 swidth=0 blks 00:07:54.850 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.850 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.850 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.850 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.788 Discarding blocks...Done. 00:07:55.788 07:26:59 -- common/autotest_common.sh@921 -- # return 0 00:07:55.788 07:26:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.692 07:27:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.692 07:27:01 -- target/filesystem.sh@25 -- # sync 00:07:57.692 07:27:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.692 07:27:01 -- target/filesystem.sh@27 -- # sync 00:07:57.692 07:27:01 -- target/filesystem.sh@29 -- # i=0 00:07:57.692 07:27:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.692 07:27:01 -- target/filesystem.sh@37 -- # kill -0 3976451 00:07:57.692 07:27:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.692 07:27:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.692 07:27:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.692 07:27:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.692 00:07:57.692 real 0m2.899s 00:07:57.692 user 0m0.029s 00:07:57.692 sys 0m0.069s 00:07:57.692 07:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.692 07:27:01 -- common/autotest_common.sh@10 -- # set +x 
00:07:57.692 ************************************ 00:07:57.692 END TEST filesystem_in_capsule_xfs 00:07:57.692 ************************************ 00:07:57.692 07:27:01 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:57.692 07:27:01 -- target/filesystem.sh@93 -- # sync 00:07:57.692 07:27:01 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.692 07:27:01 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.692 07:27:01 -- common/autotest_common.sh@1198 -- # local i=0 00:07:57.692 07:27:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:57.692 07:27:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.952 07:27:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:57.952 07:27:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.952 07:27:01 -- common/autotest_common.sh@1210 -- # return 0 00:07:57.952 07:27:01 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.952 07:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.952 07:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 07:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.952 07:27:01 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:57.952 07:27:01 -- target/filesystem.sh@101 -- # killprocess 3976451 00:07:57.952 07:27:01 -- common/autotest_common.sh@926 -- # '[' -z 3976451 ']' 00:07:57.952 07:27:01 -- common/autotest_common.sh@930 -- # kill -0 3976451 00:07:57.952 07:27:01 -- common/autotest_common.sh@931 -- # uname 00:07:57.952 07:27:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.952 07:27:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3976451 00:07:57.952 07:27:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.952 07:27:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.952 07:27:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3976451' 00:07:57.952 killing process with pid 3976451 00:07:57.952 07:27:01 -- common/autotest_common.sh@945 -- # kill 3976451 00:07:57.952 07:27:01 -- common/autotest_common.sh@950 -- # wait 3976451 00:07:58.211 07:27:02 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:58.212 00:07:58.212 real 0m21.387s 00:07:58.212 user 1m24.225s 00:07:58.212 sys 0m1.476s 00:07:58.212 07:27:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.212 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.212 ************************************ 00:07:58.212 END TEST nvmf_filesystem_in_capsule 00:07:58.212 ************************************ 00:07:58.212 07:27:02 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:58.212 07:27:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:58.212 07:27:02 -- nvmf/common.sh@116 -- # sync 00:07:58.212 07:27:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:58.212 07:27:02 -- nvmf/common.sh@119 -- # set +e 00:07:58.212 07:27:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:58.212 07:27:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:58.212 rmmod nvme_tcp 00:07:58.212 rmmod nvme_fabrics 00:07:58.212 rmmod nvme_keyring 00:07:58.472 07:27:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:58.472 07:27:02 -- nvmf/common.sh@123 -- # set -e 00:07:58.472 07:27:02 -- nvmf/common.sh@124 -- # return 0 00:07:58.472 07:27:02 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:58.472 07:27:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:58.472 07:27:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:58.472 07:27:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:58.472 07:27:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.472 07:27:02 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:07:58.472 07:27:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.472 07:27:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.472 07:27:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.378 07:27:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:00.378 00:08:00.378 real 0m48.035s 00:08:00.378 user 2m39.787s 00:08:00.378 sys 0m7.033s 00:08:00.378 07:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.378 07:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.378 ************************************ 00:08:00.378 END TEST nvmf_filesystem 00:08:00.378 ************************************ 00:08:00.378 07:27:04 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.378 07:27:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.378 07:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.378 07:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.378 ************************************ 00:08:00.378 START TEST nvmf_discovery 00:08:00.378 ************************************ 00:08:00.378 07:27:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:00.638 * Looking for test storage... 
00:08:00.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.638 07:27:04 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.638 07:27:04 -- nvmf/common.sh@7 -- # uname -s 00:08:00.638 07:27:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.638 07:27:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.638 07:27:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.638 07:27:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.638 07:27:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.638 07:27:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.638 07:27:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.638 07:27:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.638 07:27:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.638 07:27:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.638 07:27:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:00.638 07:27:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:00.638 07:27:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.638 07:27:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.638 07:27:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.638 07:27:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.638 07:27:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.638 07:27:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.638 07:27:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.638 07:27:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.638 07:27:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.638 07:27:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.638 07:27:04 -- paths/export.sh@5 -- # export PATH 00:08:00.638 07:27:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.638 07:27:04 -- nvmf/common.sh@46 -- # : 0 00:08:00.638 07:27:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.638 07:27:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.638 07:27:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.638 07:27:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.638 07:27:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.638 07:27:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.639 07:27:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.639 07:27:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.639 07:27:04 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:00.639 07:27:04 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:00.639 07:27:04 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:00.639 07:27:04 -- target/discovery.sh@15 -- # hash nvme 00:08:00.639 07:27:04 -- target/discovery.sh@20 -- # nvmftestinit 00:08:00.639 07:27:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.639 07:27:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.639 07:27:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.639 07:27:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.639 07:27:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.639 07:27:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.639 07:27:04 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:00.639 07:27:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.639 07:27:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:00.639 07:27:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:00.639 07:27:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:00.639 07:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:05.916 07:27:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:05.916 07:27:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:05.916 07:27:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:05.916 07:27:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:05.916 07:27:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:05.916 07:27:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:05.916 07:27:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:05.916 07:27:09 -- nvmf/common.sh@294 -- # net_devs=() 00:08:05.916 07:27:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:05.916 07:27:09 -- nvmf/common.sh@295 -- # e810=() 00:08:05.916 07:27:09 -- nvmf/common.sh@295 -- # local -ga e810 00:08:05.916 07:27:09 -- nvmf/common.sh@296 -- # x722=() 00:08:05.916 07:27:09 -- nvmf/common.sh@296 -- # local -ga x722 00:08:05.916 07:27:09 -- nvmf/common.sh@297 -- # mlx=() 00:08:05.916 07:27:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:05.916 07:27:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.916 07:27:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:05.916 07:27:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:05.916 07:27:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:05.916 07:27:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:05.916 07:27:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:05.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:05.916 07:27:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:05.916 07:27:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:05.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:05.916 07:27:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:08:05.916 07:27:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:05.916 07:27:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:05.916 07:27:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:05.916 07:27:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.916 07:27:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:05.916 07:27:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.916 07:27:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:05.916 Found net devices under 0000:af:00.0: cvl_0_0 00:08:05.916 07:27:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.916 07:27:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:05.916 07:27:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.916 07:27:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:05.916 07:27:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.916 07:27:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:05.917 Found net devices under 0000:af:00.1: cvl_0_1 00:08:05.917 07:27:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.917 07:27:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:05.917 07:27:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:05.917 07:27:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:05.917 07:27:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:05.917 07:27:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:05.917 07:27:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.917 07:27:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.917 07:27:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.917 07:27:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:05.917 07:27:09 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.917 07:27:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.917 07:27:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:05.917 07:27:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.917 07:27:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.917 07:27:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:05.917 07:27:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:05.917 07:27:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.917 07:27:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.917 07:27:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.917 07:27:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.917 07:27:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:05.917 07:27:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.917 07:27:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.917 07:27:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.917 07:27:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:05.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:08:05.917 00:08:05.917 --- 10.0.0.2 ping statistics --- 00:08:05.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.917 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:08:05.917 07:27:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:05.917 00:08:05.917 --- 10.0.0.1 ping statistics --- 00:08:05.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.917 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:05.917 07:27:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.917 07:27:09 -- nvmf/common.sh@410 -- # return 0 00:08:05.917 07:27:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:05.917 07:27:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.917 07:27:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:05.917 07:27:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:05.917 07:27:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.917 07:27:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:05.917 07:27:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:05.917 07:27:09 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:05.917 07:27:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:05.917 07:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:05.917 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.917 07:27:09 -- nvmf/common.sh@469 -- # nvmfpid=3984000 00:08:05.917 07:27:09 -- nvmf/common.sh@470 -- # waitforlisten 3984000 00:08:05.917 07:27:09 -- common/autotest_common.sh@819 -- # '[' -z 3984000 ']' 00:08:05.917 07:27:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.917 07:27:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:05.917 07:27:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.917 07:27:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.917 07:27:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:05.917 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.917 [2024-10-07 07:27:09.367705] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:05.917 [2024-10-07 07:27:09.367747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.917 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.917 [2024-10-07 07:27:09.427310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.917 [2024-10-07 07:27:09.504754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.917 [2024-10-07 07:27:09.504860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.917 [2024-10-07 07:27:09.504868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.917 [2024-10-07 07:27:09.504875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.917 [2024-10-07 07:27:09.504918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.917 [2024-10-07 07:27:09.505017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.917 [2024-10-07 07:27:09.505105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.917 [2024-10-07 07:27:09.505107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.485 07:27:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.485 07:27:10 -- common/autotest_common.sh@852 -- # return 0 00:08:06.485 07:27:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.485 07:27:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:06.485 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.485 07:27:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.485 07:27:10 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.485 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.485 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 [2024-10-07 07:27:10.235348] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@26 -- # seq 1 4 00:08:06.486 07:27:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.486 07:27:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 Null1 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 [2024-10-07 07:27:10.284809] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.486 07:27:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 Null2 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 
07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.486 07:27:10 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 Null3 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.486 07:27:10 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 Null4 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:06.486 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.486 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.486 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.486 07:27:10 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:06.744 00:08:06.745 Discovery Log Number of Records 6, Generation counter 6 00:08:06.745 =====Discovery Log Entry 0====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: current discovery subsystem 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4420 00:08:06.745 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: explicit discovery connections, duplicate discovery information 00:08:06.745 sectype: none 00:08:06.745 =====Discovery Log Entry 1====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: nvme subsystem 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4420 00:08:06.745 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: none 00:08:06.745 sectype: none 00:08:06.745 =====Discovery Log Entry 2====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: nvme subsystem 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4420 00:08:06.745 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: none 00:08:06.745 sectype: none 00:08:06.745 =====Discovery Log Entry 3====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: nvme subsystem 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4420 00:08:06.745 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: none 00:08:06.745 sectype: none 00:08:06.745 =====Discovery Log Entry 4====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: nvme subsystem 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4420 00:08:06.745 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: none 00:08:06.745 sectype: none 00:08:06.745 =====Discovery Log Entry 5====== 00:08:06.745 trtype: tcp 00:08:06.745 adrfam: ipv4 00:08:06.745 subtype: discovery subsystem referral 00:08:06.745 treq: not required 00:08:06.745 portid: 0 00:08:06.745 trsvcid: 4430 00:08:06.745 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.745 traddr: 10.0.0.2 00:08:06.745 eflags: none 00:08:06.745 sectype: none 00:08:06.745 07:27:10 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:06.745 Perform nvmf subsystem discovery via RPC 00:08:06.745 07:27:10 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:06.745 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.745 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.745 [2024-10-07 07:27:10.537468] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:06.745 [ 00:08:06.745 { 00:08:06.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:06.745 "subtype": "Discovery", 00:08:06.745 "listen_addresses": [ 00:08:06.745 { 00:08:06.745 "transport": "TCP", 00:08:06.745 "trtype": "TCP", 00:08:06.745 "adrfam": "IPv4", 00:08:06.745 "traddr": "10.0.0.2", 00:08:06.745 "trsvcid": "4420" 00:08:06.745 } 00:08:06.745 ], 00:08:06.745 "allow_any_host": true, 00:08:06.745 "hosts": [] 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.745 "subtype": "NVMe", 00:08:06.745 "listen_addresses": [ 00:08:06.745 { 00:08:06.745 "transport": "TCP", 00:08:06.745 "trtype": "TCP", 00:08:06.745 "adrfam": "IPv4", 00:08:06.745 "traddr": "10.0.0.2", 00:08:06.745 "trsvcid": "4420" 00:08:06.745 } 00:08:06.745 ], 00:08:06.745 "allow_any_host": true, 00:08:06.745 "hosts": [], 00:08:06.745 "serial_number": "SPDK00000000000001", 00:08:06.745 "model_number": 
"SPDK bdev Controller", 00:08:06.745 "max_namespaces": 32, 00:08:06.745 "min_cntlid": 1, 00:08:06.745 "max_cntlid": 65519, 00:08:06.745 "namespaces": [ 00:08:06.745 { 00:08:06.745 "nsid": 1, 00:08:06.745 "bdev_name": "Null1", 00:08:06.745 "name": "Null1", 00:08:06.745 "nguid": "68011E3A75AA4D90B8E140C7FE0751CE", 00:08:06.745 "uuid": "68011e3a-75aa-4d90-b8e1-40c7fe0751ce" 00:08:06.745 } 00:08:06.745 ] 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:06.745 "subtype": "NVMe", 00:08:06.745 "listen_addresses": [ 00:08:06.745 { 00:08:06.745 "transport": "TCP", 00:08:06.745 "trtype": "TCP", 00:08:06.745 "adrfam": "IPv4", 00:08:06.745 "traddr": "10.0.0.2", 00:08:06.745 "trsvcid": "4420" 00:08:06.745 } 00:08:06.745 ], 00:08:06.745 "allow_any_host": true, 00:08:06.745 "hosts": [], 00:08:06.745 "serial_number": "SPDK00000000000002", 00:08:06.745 "model_number": "SPDK bdev Controller", 00:08:06.745 "max_namespaces": 32, 00:08:06.745 "min_cntlid": 1, 00:08:06.745 "max_cntlid": 65519, 00:08:06.745 "namespaces": [ 00:08:06.745 { 00:08:06.745 "nsid": 1, 00:08:06.745 "bdev_name": "Null2", 00:08:06.745 "name": "Null2", 00:08:06.745 "nguid": "35D58BC8DEEB40B190D2362B47F8AE52", 00:08:06.745 "uuid": "35d58bc8-deeb-40b1-90d2-362b47f8ae52" 00:08:06.745 } 00:08:06.745 ] 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:06.745 "subtype": "NVMe", 00:08:06.745 "listen_addresses": [ 00:08:06.745 { 00:08:06.745 "transport": "TCP", 00:08:06.745 "trtype": "TCP", 00:08:06.745 "adrfam": "IPv4", 00:08:06.745 "traddr": "10.0.0.2", 00:08:06.745 "trsvcid": "4420" 00:08:06.745 } 00:08:06.745 ], 00:08:06.745 "allow_any_host": true, 00:08:06.745 "hosts": [], 00:08:06.745 "serial_number": "SPDK00000000000003", 00:08:06.745 "model_number": "SPDK bdev Controller", 00:08:06.745 "max_namespaces": 32, 00:08:06.745 "min_cntlid": 1, 00:08:06.745 "max_cntlid": 65519, 00:08:06.745 "namespaces": [ 00:08:06.745 { 00:08:06.745 "nsid": 1, 
00:08:06.745 "bdev_name": "Null3", 00:08:06.745 "name": "Null3", 00:08:06.745 "nguid": "73E85E0FCCF54F61A8852060B7F2FC1D", 00:08:06.745 "uuid": "73e85e0f-ccf5-4f61-a885-2060b7f2fc1d" 00:08:06.745 } 00:08:06.746 ] 00:08:06.746 }, 00:08:06.746 { 00:08:06.746 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:06.746 "subtype": "NVMe", 00:08:06.746 "listen_addresses": [ 00:08:06.746 { 00:08:06.746 "transport": "TCP", 00:08:06.746 "trtype": "TCP", 00:08:06.746 "adrfam": "IPv4", 00:08:06.746 "traddr": "10.0.0.2", 00:08:06.746 "trsvcid": "4420" 00:08:06.746 } 00:08:06.746 ], 00:08:06.746 "allow_any_host": true, 00:08:06.746 "hosts": [], 00:08:06.746 "serial_number": "SPDK00000000000004", 00:08:06.746 "model_number": "SPDK bdev Controller", 00:08:06.746 "max_namespaces": 32, 00:08:06.746 "min_cntlid": 1, 00:08:06.746 "max_cntlid": 65519, 00:08:06.746 "namespaces": [ 00:08:06.746 { 00:08:06.746 "nsid": 1, 00:08:06.746 "bdev_name": "Null4", 00:08:06.746 "name": "Null4", 00:08:06.746 "nguid": "AEAF4CE3043645B6BF92912B7A032DC6", 00:08:06.746 "uuid": "aeaf4ce3-0436-45b6-bf92-912b7a032dc6" 00:08:06.746 } 00:08:06.746 ] 00:08:06.746 } 00:08:06.746 ] 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@42 -- # seq 1 4 00:08:06.746 07:27:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.746 07:27:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.746 07:27:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.746 07:27:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.746 07:27:10 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 
07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:06.746 07:27:10 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:06.746 07:27:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.746 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:08:06.746 07:27:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.746 07:27:10 -- target/discovery.sh@49 -- # check_bdevs= 00:08:06.746 07:27:10 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:06.746 07:27:10 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:06.746 07:27:10 -- target/discovery.sh@57 -- # nvmftestfini 00:08:06.746 07:27:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:06.746 07:27:10 -- nvmf/common.sh@116 -- # sync 00:08:06.746 07:27:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:06.746 07:27:10 -- nvmf/common.sh@119 -- # set +e 00:08:06.746 07:27:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.746 07:27:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:06.746 rmmod nvme_tcp 00:08:06.746 rmmod nvme_fabrics 00:08:06.746 rmmod nvme_keyring 00:08:07.005 07:27:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.005 07:27:10 -- nvmf/common.sh@123 -- # set -e 00:08:07.005 07:27:10 -- nvmf/common.sh@124 -- # return 0 00:08:07.005 07:27:10 -- nvmf/common.sh@477 -- # '[' -n 3984000 ']' 00:08:07.005 07:27:10 -- nvmf/common.sh@478 -- # killprocess 3984000 00:08:07.005 07:27:10 -- common/autotest_common.sh@926 -- # '[' -z 3984000 ']' 00:08:07.005 07:27:10 -- common/autotest_common.sh@930 -- # kill -0 3984000 00:08:07.005 
07:27:10 -- common/autotest_common.sh@931 -- # uname 00:08:07.005 07:27:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.005 07:27:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3984000 00:08:07.005 07:27:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:07.005 07:27:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:07.005 07:27:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3984000' 00:08:07.005 killing process with pid 3984000 00:08:07.005 07:27:10 -- common/autotest_common.sh@945 -- # kill 3984000 00:08:07.005 [2024-10-07 07:27:10.789381] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:07.005 07:27:10 -- common/autotest_common.sh@950 -- # wait 3984000 00:08:07.264 07:27:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.264 07:27:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:07.264 07:27:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:07.264 07:27:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.264 07:27:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:07.264 07:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.264 07:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.264 07:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.165 07:27:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:09.165 00:08:09.165 real 0m8.756s 00:08:09.165 user 0m7.418s 00:08:09.165 sys 0m4.059s 00:08:09.165 07:27:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.165 07:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 ************************************ 00:08:09.165 END TEST nvmf_discovery 00:08:09.165 ************************************ 00:08:09.165 07:27:13 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.165 07:27:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.165 07:27:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.165 07:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.165 ************************************ 00:08:09.165 START TEST nvmf_referrals 00:08:09.165 ************************************ 00:08:09.165 07:27:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.424 * Looking for test storage... 00:08:09.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.424 07:27:13 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.424 07:27:13 -- nvmf/common.sh@7 -- # uname -s 00:08:09.424 07:27:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.424 07:27:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.424 07:27:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.424 07:27:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.424 07:27:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.424 07:27:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.424 07:27:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.424 07:27:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.424 07:27:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.424 07:27:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.424 07:27:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.424 07:27:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.424 07:27:13 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.424 07:27:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.424 07:27:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.424 07:27:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.424 07:27:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.424 07:27:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.424 07:27:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.424 07:27:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.424 07:27:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.424 07:27:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.424 07:27:13 -- paths/export.sh@5 -- # export PATH 00:08:09.424 07:27:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.424 07:27:13 -- nvmf/common.sh@46 -- # : 0 00:08:09.424 07:27:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.424 07:27:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.424 07:27:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.424 07:27:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.424 07:27:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.424 07:27:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.424 07:27:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.424 07:27:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.424 07:27:13 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:09.424 07:27:13 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:09.424 07:27:13 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:09.424 07:27:13 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:09.424 07:27:13 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:09.424 07:27:13 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:09.424 07:27:13 -- target/referrals.sh@37 -- # nvmftestinit 00:08:09.424 07:27:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.424 07:27:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.424 07:27:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.424 07:27:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.424 07:27:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.424 07:27:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.424 07:27:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.424 07:27:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.424 07:27:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:09.424 07:27:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:09.424 07:27:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:09.424 07:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.698 07:27:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.698 07:27:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:14.698 07:27:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:14.698 07:27:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:14.698 07:27:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:14.698 07:27:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:14.698 07:27:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:14.698 07:27:18 -- nvmf/common.sh@294 -- # net_devs=() 00:08:14.698 07:27:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:14.698 07:27:18 -- nvmf/common.sh@295 -- # e810=() 00:08:14.698 07:27:18 -- nvmf/common.sh@295 -- # local 
-ga e810 00:08:14.698 07:27:18 -- nvmf/common.sh@296 -- # x722=() 00:08:14.698 07:27:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:14.698 07:27:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:14.698 07:27:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:14.698 07:27:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.698 07:27:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.698 07:27:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:14.698 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:14.698 07:27:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:14.698 07:27:18 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.698 07:27:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:14.698 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:14.698 07:27:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.698 07:27:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.698 07:27:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.698 07:27:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:14.698 Found net devices under 0000:af:00.0: cvl_0_0 00:08:14.698 07:27:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.698 07:27:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.698 07:27:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.698 07:27:18 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:14.698 Found net devices under 0000:af:00.1: cvl_0_1 00:08:14.698 07:27:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:14.698 07:27:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:14.698 07:27:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.698 07:27:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.698 07:27:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:14.698 07:27:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.698 07:27:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.698 07:27:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:14.698 07:27:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.698 07:27:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.698 07:27:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:14.698 07:27:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:14.698 07:27:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.698 07:27:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.698 07:27:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.698 07:27:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.698 07:27:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:14.698 07:27:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.698 07:27:18 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:14.698 07:27:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.698 07:27:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:14.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:08:14.698 00:08:14.698 --- 10.0.0.2 ping statistics --- 00:08:14.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.698 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:08:14.698 07:27:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:14.698 00:08:14.698 --- 10.0.0.1 ping statistics --- 00:08:14.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.698 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:14.698 07:27:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.698 07:27:18 -- nvmf/common.sh@410 -- # return 0 00:08:14.698 07:27:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.698 07:27:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.698 07:27:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:14.698 07:27:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.698 07:27:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:14.698 07:27:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:14.698 07:27:18 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:14.698 07:27:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.698 07:27:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.698 07:27:18 -- common/autotest_common.sh@10 -- # set +x 00:08:14.698 07:27:18 -- nvmf/common.sh@469 -- # nvmfpid=3987819 00:08:14.698 07:27:18 
-- nvmf/common.sh@470 -- # waitforlisten 3987819 00:08:14.698 07:27:18 -- common/autotest_common.sh@819 -- # '[' -z 3987819 ']' 00:08:14.699 07:27:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.699 07:27:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.699 07:27:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.699 07:27:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.699 07:27:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.699 07:27:18 -- common/autotest_common.sh@10 -- # set +x 00:08:14.699 [2024-10-07 07:27:18.447471] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:14.699 [2024-10-07 07:27:18.447515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.699 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.699 [2024-10-07 07:27:18.506183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.699 [2024-10-07 07:27:18.582048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.699 [2024-10-07 07:27:18.582159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.699 [2024-10-07 07:27:18.582166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.699 [2024-10-07 07:27:18.582173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:14.699 [2024-10-07 07:27:18.582209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.699 [2024-10-07 07:27:18.582309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.699 [2024-10-07 07:27:18.582396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.699 [2024-10-07 07:27:18.582397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.636 07:27:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:15.636 07:27:19 -- common/autotest_common.sh@852 -- # return 0 00:08:15.636 07:27:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:15.636 07:27:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.636 07:27:19 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 [2024-10-07 07:27:19.314391] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 [2024-10-07 07:27:19.327812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.636 07:27:19 -- target/referrals.sh@48 -- # jq length 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:15.636 07:27:19 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:15.636 07:27:19 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.636 07:27:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.636 07:27:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.636 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.636 07:27:19 -- target/referrals.sh@21 -- # sort 00:08:15.636 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.636 07:27:19 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.636 07:27:19 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:15.636 07:27:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.636 07:27:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.637 07:27:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.637 07:27:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.637 07:27:19 -- target/referrals.sh@26 -- # sort 00:08:15.637 07:27:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.637 07:27:19 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.637 07:27:19 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.637 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.637 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.637 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.637 07:27:19 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.637 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.637 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.637 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.637 07:27:19 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.637 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.637 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.896 07:27:19 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.896 07:27:19 -- target/referrals.sh@56 -- # jq length 00:08:15.896 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.896 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.896 07:27:19 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:15.896 07:27:19 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:15.896 07:27:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.896 07:27:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.896 07:27:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.896 07:27:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.896 07:27:19 -- target/referrals.sh@26 -- # sort 00:08:15.896 07:27:19 -- target/referrals.sh@26 -- # echo 00:08:15.896 07:27:19 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:15.896 07:27:19 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:15.896 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.896 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.896 07:27:19 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.896 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.896 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.896 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.896 07:27:19 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:15.896 07:27:19 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.896 07:27:19 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.896 07:27:19 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.896 07:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.896 07:27:19 -- target/referrals.sh@21 -- # sort 00:08:15.896 07:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:16.155 07:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.155 07:27:19 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:16.155 07:27:19 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:16.155 07:27:19 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:16.155 07:27:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.155 07:27:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.155 07:27:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.155 07:27:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.155 07:27:19 -- target/referrals.sh@26 -- # sort 00:08:16.155 07:27:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:16.414 07:27:20 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:16.414 07:27:20 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:16.414 07:27:20 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:16.414 07:27:20 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.414 07:27:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:16.414 07:27:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.414 07:27:20 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:16.414 07:27:20 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.414 07:27:20 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.414 07:27:20 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:16.414 07:27:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.414 07:27:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.674 07:27:20 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:16.674 07:27:20 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:16.674 07:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.674 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.674 07:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.674 07:27:20 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:16.674 07:27:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.674 07:27:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.674 07:27:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.674 07:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.674 07:27:20 -- target/referrals.sh@21 -- # sort 00:08:16.674 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.674 07:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:16.674 07:27:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:16.674 07:27:20 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.674 07:27:20 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:16.674 07:27:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.674 07:27:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.674 07:27:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.674 07:27:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.674 07:27:20 -- target/referrals.sh@26 -- # sort 00:08:16.674 07:27:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:16.674 07:27:20 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.674 07:27:20 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:16.674 07:27:20 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.674 07:27:20 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:16.674 07:27:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.674 07:27:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.933 07:27:20 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:16.933 07:27:20 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.933 07:27:20 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:16.933 07:27:20 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.933 07:27:20 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.933 07:27:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:17.192 07:27:20 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.192 07:27:20 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:17.192 07:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.192 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.192 07:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.192 07:27:20 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.192 07:27:20 -- target/referrals.sh@82 -- # jq length 00:08:17.192 07:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.192 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:17.192 07:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.192 07:27:21 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:17.192 07:27:21 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:17.192 07:27:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.192 07:27:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.192 07:27:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.192 07:27:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.192 07:27:21 -- target/referrals.sh@26 -- # sort 00:08:17.451 07:27:21 -- target/referrals.sh@26 -- # echo 00:08:17.451 07:27:21 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:17.451 07:27:21 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:17.451 07:27:21 -- target/referrals.sh@86 -- # nvmftestfini 00:08:17.451 07:27:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:17.451 07:27:21 -- nvmf/common.sh@116 -- # sync 00:08:17.452 07:27:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:17.452 07:27:21 -- nvmf/common.sh@119 -- # set +e 00:08:17.452 07:27:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:17.452 07:27:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:17.452 rmmod nvme_tcp 00:08:17.452 rmmod nvme_fabrics 00:08:17.452 rmmod nvme_keyring 00:08:17.452 07:27:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:17.452 07:27:21 -- nvmf/common.sh@123 -- # set -e 00:08:17.452 07:27:21 -- nvmf/common.sh@124 -- # return 0 00:08:17.452 07:27:21 -- nvmf/common.sh@477 -- # '[' -n 3987819 ']' 00:08:17.452 07:27:21 -- nvmf/common.sh@478 -- # killprocess 3987819 00:08:17.452 07:27:21 -- common/autotest_common.sh@926 -- # '[' -z 3987819 ']' 00:08:17.452 07:27:21 -- common/autotest_common.sh@930 -- # kill -0 3987819 00:08:17.452 07:27:21 -- common/autotest_common.sh@931 -- # uname 00:08:17.452 07:27:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:17.452 07:27:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3987819 00:08:17.452 07:27:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:17.452 07:27:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:17.452 07:27:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3987819' 00:08:17.452 killing process with pid 3987819 00:08:17.452 07:27:21 -- common/autotest_common.sh@945 -- # kill 3987819 00:08:17.452 07:27:21 -- common/autotest_common.sh@950 -- # wait 3987819 00:08:17.711 07:27:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:17.711 07:27:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:17.711 
07:27:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:17.711 07:27:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.711 07:27:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:17.711 07:27:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.711 07:27:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.711 07:27:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.619 07:27:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:19.619 00:08:19.619 real 0m10.464s 00:08:19.619 user 0m13.908s 00:08:19.619 sys 0m4.659s 00:08:19.619 07:27:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.619 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.619 ************************************ 00:08:19.619 END TEST nvmf_referrals 00:08:19.619 ************************************ 00:08:19.879 07:27:23 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:19.879 07:27:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:19.879 07:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.879 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.879 ************************************ 00:08:19.879 START TEST nvmf_connect_disconnect 00:08:19.879 ************************************ 00:08:19.879 07:27:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:19.879 * Looking for test storage... 
00:08:19.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.879 07:27:23 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.879 07:27:23 -- nvmf/common.sh@7 -- # uname -s 00:08:19.879 07:27:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.879 07:27:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.879 07:27:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.879 07:27:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.879 07:27:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.879 07:27:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.879 07:27:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.879 07:27:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.879 07:27:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.879 07:27:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.879 07:27:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:19.879 07:27:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:19.879 07:27:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.879 07:27:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.879 07:27:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.879 07:27:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.879 07:27:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.879 07:27:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.879 07:27:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.879 07:27:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.879 07:27:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.879 07:27:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.879 07:27:23 -- paths/export.sh@5 -- # export PATH 00:08:19.879 07:27:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.879 07:27:23 -- nvmf/common.sh@46 -- # : 0 00:08:19.879 07:27:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:19.879 07:27:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:19.879 07:27:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:19.879 07:27:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.879 07:27:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.879 07:27:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:19.879 07:27:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:19.879 07:27:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:19.879 07:27:23 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.879 07:27:23 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.879 07:27:23 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:19.879 07:27:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:19.879 07:27:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.879 07:27:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:19.879 07:27:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:19.879 07:27:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:19.879 07:27:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.879 07:27:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.879 07:27:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:19.879 07:27:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:19.879 07:27:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:19.879 07:27:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:19.879 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.155 07:27:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:25.155 07:27:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:25.155 07:27:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:25.155 07:27:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:25.155 07:27:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:25.155 07:27:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:25.155 07:27:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:25.155 07:27:28 -- nvmf/common.sh@294 -- # net_devs=() 00:08:25.155 07:27:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:25.155 07:27:28 -- nvmf/common.sh@295 -- # e810=() 00:08:25.155 07:27:28 -- nvmf/common.sh@295 -- # local -ga e810 00:08:25.155 07:27:28 -- nvmf/common.sh@296 -- # x722=() 00:08:25.155 07:27:28 -- nvmf/common.sh@296 -- # local -ga x722 00:08:25.155 07:27:28 -- nvmf/common.sh@297 -- # mlx=() 00:08:25.155 07:27:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:25.155 07:27:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:25.155 07:27:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.155 07:27:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:25.155 07:27:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:25.155 07:27:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.155 07:27:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:25.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:25.155 07:27:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:25.155 07:27:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:25.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:25.155 07:27:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:25.155 
07:27:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.155 07:27:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.155 07:27:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.155 07:27:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:25.155 Found net devices under 0000:af:00.0: cvl_0_0 00:08:25.155 07:27:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.155 07:27:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:25.155 07:27:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.155 07:27:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.155 07:27:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:25.155 Found net devices under 0000:af:00.1: cvl_0_1 00:08:25.155 07:27:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.155 07:27:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:25.155 07:27:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:25.155 07:27:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:25.155 07:27:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.155 07:27:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.155 07:27:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.155 07:27:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:25.155 07:27:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.155 07:27:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.155 07:27:28 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:25.155 07:27:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.156 07:27:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.156 07:27:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:25.156 07:27:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:25.156 07:27:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.156 07:27:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.156 07:27:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.156 07:27:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.156 07:27:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:25.156 07:27:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.416 07:27:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.416 07:27:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.416 07:27:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:25.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:08:25.416 00:08:25.416 --- 10.0.0.2 ping statistics --- 00:08:25.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.416 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:08:25.416 07:27:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:08:25.416 00:08:25.416 --- 10.0.0.1 ping statistics --- 00:08:25.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.416 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:25.416 07:27:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.416 07:27:29 -- nvmf/common.sh@410 -- # return 0 00:08:25.416 07:27:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:25.416 07:27:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.416 07:27:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:25.416 07:27:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:25.416 07:27:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.416 07:27:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:25.416 07:27:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:25.416 07:27:29 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:25.416 07:27:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.416 07:27:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:25.416 07:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.416 07:27:29 -- nvmf/common.sh@469 -- # nvmfpid=3991853 00:08:25.416 07:27:29 -- nvmf/common.sh@470 -- # waitforlisten 3991853 00:08:25.416 07:27:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.416 07:27:29 -- common/autotest_common.sh@819 -- # '[' -z 3991853 ']' 00:08:25.416 07:27:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.416 07:27:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.416 07:27:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.416 07:27:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.416 07:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.416 [2024-10-07 07:27:29.270120] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:25.416 [2024-10-07 07:27:29.270162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.416 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.416 [2024-10-07 07:27:29.331883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.676 [2024-10-07 07:27:29.407924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.676 [2024-10-07 07:27:29.408036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.676 [2024-10-07 07:27:29.408045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.676 [2024-10-07 07:27:29.408051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
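The interface plumbing traced above (nvmf/common.sh's `nvmf_tcp_init`) moves one port of the NIC pair into a private network namespace so the target and the initiator talk over a real TCP path, then verifies it with pings in both directions. The sequence can be sketched as the dry-run below; `run` only echoes, since the real commands need root and the `cvl_0_0`/`cvl_0_1` hardware from this CI host:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as traced above: the target port is moved
# into a namespace, the initiator port stays in the root namespace. "run"
# echoes instead of executing, so no root or cvl_* devices are required.
set -u

run() { echo "+ $*"; }   # swap the body for "$@" to actually execute (root only)

nvmf_tcp_init_dryrun() {
  local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  local tgt_ip=10.0.0.2 ini_ip=10.0.0.1
  run ip -4 addr flush "$tgt_if"
  run ip -4 addr flush "$ini_if"
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"              # target port leaves the root ns
  run ip addr add "$ini_ip/24" dev "$ini_if"
  run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 "$tgt_ip"                            # root ns -> namespace
  run ip netns exec "$ns" ping -c 1 "$ini_ip"        # namespace -> root ns
}

nvmf_tcp_init_dryrun
```

With the namespace in place, the target itself is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.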
00:08:25.676 [2024-10-07 07:27:29.408109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.676 [2024-10-07 07:27:29.408121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.676 [2024-10-07 07:27:29.408218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.676 [2024-10-07 07:27:29.408219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.246 07:27:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.246 07:27:30 -- common/autotest_common.sh@852 -- # return 0 00:08:26.246 07:27:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:26.246 07:27:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:26.246 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.246 07:27:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.246 07:27:30 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.246 07:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.246 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.246 [2024-10-07 07:27:30.120293] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.246 07:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.246 07:27:30 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:26.246 07:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.246 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.246 07:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.246 07:27:30 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.247 07:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.247 07:27:30 -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.247 07:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.247 07:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.247 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.247 07:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.247 07:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.247 07:27:30 -- common/autotest_common.sh@10 -- # set +x 00:08:26.247 [2024-10-07 07:27:30.176092] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.247 07:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:26.247 07:27:30 -- target/connect_disconnect.sh@34 -- # set +x 00:08:28.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:52.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.627 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.607 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.500 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.873 07:31:22 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:18.873 07:31:22 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:18.873 07:31:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.873 07:31:22 -- nvmf/common.sh@116 -- # sync 00:12:18.873 07:31:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.873 07:31:22 -- nvmf/common.sh@119 -- # set +e 00:12:18.873 07:31:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.873 07:31:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.873 rmmod nvme_tcp 00:12:18.873 rmmod nvme_fabrics 00:12:18.873 rmmod nvme_keyring 00:12:18.873 07:31:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.873 07:31:22 -- nvmf/common.sh@123 -- # set -e 00:12:18.873 07:31:22 -- nvmf/common.sh@124 -- # return 0 00:12:18.873 07:31:22 -- nvmf/common.sh@477 -- # '[' -n 3991853 ']' 00:12:18.873 07:31:22 -- nvmf/common.sh@478 -- # killprocess 3991853 00:12:18.873 07:31:22 -- common/autotest_common.sh@926 -- # '[' -z 3991853 ']' 00:12:18.873 07:31:22 -- common/autotest_common.sh@930 -- # kill -0 3991853 00:12:18.873 07:31:22 -- common/autotest_common.sh@931 -- # uname 00:12:18.873 07:31:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:18.873 07:31:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
3991853 00:12:18.873 07:31:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:18.873 07:31:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:18.873 07:31:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3991853' 00:12:18.873 killing process with pid 3991853 00:12:18.873 07:31:22 -- common/autotest_common.sh@945 -- # kill 3991853 00:12:18.873 07:31:22 -- common/autotest_common.sh@950 -- # wait 3991853 00:12:18.873 07:31:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.873 07:31:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.873 07:31:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.873 07:31:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.873 07:31:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.873 07:31:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.873 07:31:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.873 07:31:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.407 07:31:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:21.407 00:12:21.407 real 4m1.229s 00:12:21.407 user 15m23.674s 00:12:21.407 sys 0m24.496s 00:12:21.407 07:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.407 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 ************************************ 00:12:21.407 END TEST nvmf_connect_disconnect 00:12:21.407 ************************************ 00:12:21.407 07:31:24 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.407 07:31:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:21.407 07:31:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.407 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 ************************************ 00:12:21.408 
START TEST nvmf_multitarget 00:12:21.408 ************************************ 00:12:21.408 07:31:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.408 * Looking for test storage... 00:12:21.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.408 07:31:24 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.408 07:31:24 -- nvmf/common.sh@7 -- # uname -s 00:12:21.408 07:31:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.408 07:31:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.408 07:31:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.408 07:31:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.408 07:31:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.408 07:31:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.408 07:31:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.408 07:31:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.408 07:31:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.408 07:31:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.408 07:31:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:21.408 07:31:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:21.408 07:31:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.408 07:31:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.408 07:31:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.408 07:31:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.408 07:31:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.408 07:31:24 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.408 07:31:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.408 07:31:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.408 07:31:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.408 07:31:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.408 07:31:24 -- paths/export.sh@5 -- # export PATH 00:12:21.408 07:31:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.408 07:31:24 -- nvmf/common.sh@46 -- # : 0 00:12:21.408 07:31:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.408 07:31:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.408 07:31:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.408 07:31:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.408 07:31:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.408 07:31:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.408 07:31:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.408 07:31:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.408 07:31:24 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:21.408 07:31:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:21.408 07:31:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.408 07:31:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.408 07:31:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.408 07:31:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.408 07:31:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.408 07:31:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.408 07:31:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.408 07:31:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.408 07:31:24 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:21.408 07:31:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:21.408 07:31:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:21.408 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.676 07:31:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:26.676 07:31:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:26.676 07:31:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:26.676 07:31:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:26.676 07:31:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:26.676 07:31:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:26.676 07:31:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:26.676 07:31:29 -- nvmf/common.sh@294 -- # net_devs=() 00:12:26.676 07:31:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:26.676 07:31:29 -- nvmf/common.sh@295 -- # e810=() 00:12:26.676 07:31:29 -- nvmf/common.sh@295 -- # local -ga e810 00:12:26.676 07:31:29 -- nvmf/common.sh@296 -- # x722=() 00:12:26.676 07:31:29 -- nvmf/common.sh@296 -- # local -ga x722 00:12:26.676 07:31:29 -- nvmf/common.sh@297 -- # mlx=() 00:12:26.676 07:31:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:26.676 07:31:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.676 07:31:29 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.676 07:31:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:26.676 07:31:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:26.676 07:31:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:26.677 07:31:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.677 07:31:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.677 07:31:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.677 07:31:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.677 07:31:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:26.677 07:31:29 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.677 07:31:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.677 07:31:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.677 07:31:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:26.677 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.677 07:31:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.677 07:31:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.677 07:31:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.677 07:31:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.677 07:31:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.677 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.677 07:31:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.677 07:31:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:26.677 07:31:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:26.677 07:31:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.677 07:31:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.677 07:31:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.677 07:31:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:26.677 07:31:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.677 07:31:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.677 07:31:29 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:26.677 07:31:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.677 07:31:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.677 07:31:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:26.677 07:31:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:26.677 07:31:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.677 07:31:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.677 07:31:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.677 07:31:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.677 07:31:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:26.677 07:31:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.677 07:31:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.677 07:31:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.677 07:31:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:26.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:12:26.677 00:12:26.677 --- 10.0.0.2 ping statistics --- 00:12:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.677 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:26.677 07:31:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:12:26.677 00:12:26.677 --- 10.0.0.1 ping statistics --- 00:12:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.677 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:12:26.677 07:31:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.677 07:31:29 -- nvmf/common.sh@410 -- # return 0 00:12:26.677 07:31:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:26.677 07:31:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.677 07:31:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:26.677 07:31:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.677 07:31:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:26.677 07:31:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:26.677 07:31:29 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:26.677 07:31:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:26.677 07:31:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:26.677 07:31:29 -- common/autotest_common.sh@10 -- # set +x 00:12:26.677 07:31:29 -- nvmf/common.sh@469 -- # nvmfpid=4035595 00:12:26.677 07:31:29 -- nvmf/common.sh@470 -- # waitforlisten 4035595 00:12:26.677 07:31:29 -- common/autotest_common.sh@819 -- # '[' -z 4035595 ']' 00:12:26.677 07:31:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.677 07:31:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.677 07:31:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.677 07:31:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.677 07:31:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.677 07:31:29 -- common/autotest_common.sh@10 -- # set +x 00:12:26.677 [2024-10-07 07:31:30.023138] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:26.677 [2024-10-07 07:31:30.023192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.677 [2024-10-07 07:31:30.080966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.677 [2024-10-07 07:31:30.157732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.677 [2024-10-07 07:31:30.157845] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.677 [2024-10-07 07:31:30.157853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.677 [2024-10-07 07:31:30.157860] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.677 [2024-10-07 07:31:30.157894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.677 [2024-10-07 07:31:30.157993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.677 [2024-10-07 07:31:30.158083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.677 [2024-10-07 07:31:30.158085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.936 07:31:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:26.936 07:31:30 -- common/autotest_common.sh@852 -- # return 0 00:12:26.936 07:31:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.936 07:31:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:26.936 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:12:26.936 07:31:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.936 07:31:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.936 07:31:30 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.936 07:31:30 -- target/multitarget.sh@21 -- # jq length 00:12:27.195 07:31:30 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:27.195 07:31:30 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:27.195 "nvmf_tgt_1" 00:12:27.195 07:31:31 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:27.454 "nvmf_tgt_2" 00:12:27.454 07:31:31 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.454 07:31:31 -- target/multitarget.sh@28 -- # jq length 00:12:27.454 
07:31:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:27.454 07:31:31 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:27.454 true 00:12:27.712 07:31:31 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:27.712 true 00:12:27.712 07:31:31 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.712 07:31:31 -- target/multitarget.sh@35 -- # jq length 00:12:27.712 07:31:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:27.712 07:31:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:27.712 07:31:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:27.712 07:31:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:27.712 07:31:31 -- nvmf/common.sh@116 -- # sync 00:12:27.712 07:31:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:27.712 07:31:31 -- nvmf/common.sh@119 -- # set +e 00:12:27.712 07:31:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:27.712 07:31:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:27.712 rmmod nvme_tcp 00:12:27.712 rmmod nvme_fabrics 00:12:27.712 rmmod nvme_keyring 00:12:27.970 07:31:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:27.970 07:31:31 -- nvmf/common.sh@123 -- # set -e 00:12:27.970 07:31:31 -- nvmf/common.sh@124 -- # return 0 00:12:27.970 07:31:31 -- nvmf/common.sh@477 -- # '[' -n 4035595 ']' 00:12:27.971 07:31:31 -- nvmf/common.sh@478 -- # killprocess 4035595 00:12:27.971 07:31:31 -- common/autotest_common.sh@926 -- # '[' -z 4035595 ']' 00:12:27.971 07:31:31 -- common/autotest_common.sh@930 -- # kill -0 4035595 00:12:27.971 07:31:31 -- common/autotest_common.sh@931 -- # uname 00:12:27.971 07:31:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:12:27.971 07:31:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4035595 00:12:27.971 07:31:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:27.971 07:31:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:27.971 07:31:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4035595' 00:12:27.971 killing process with pid 4035595 00:12:27.971 07:31:31 -- common/autotest_common.sh@945 -- # kill 4035595 00:12:27.971 07:31:31 -- common/autotest_common.sh@950 -- # wait 4035595 00:12:27.971 07:31:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.971 07:31:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.971 07:31:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.971 07:31:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.971 07:31:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.971 07:31:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.971 07:31:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.971 07:31:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.509 07:31:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:30.509 00:12:30.509 real 0m9.112s 00:12:30.509 user 0m9.193s 00:12:30.509 sys 0m4.151s 00:12:30.509 07:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.509 07:31:33 -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 ************************************ 00:12:30.509 END TEST nvmf_multitarget 00:12:30.509 ************************************ 00:12:30.509 07:31:34 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.509 07:31:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:30.509 07:31:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:30.509 07:31:34 -- common/autotest_common.sh@10 -- # set +x 
00:12:30.509 ************************************ 00:12:30.509 START TEST nvmf_rpc 00:12:30.509 ************************************ 00:12:30.509 07:31:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.509 * Looking for test storage... 00:12:30.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.509 07:31:34 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.509 07:31:34 -- nvmf/common.sh@7 -- # uname -s 00:12:30.509 07:31:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.509 07:31:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.509 07:31:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.509 07:31:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.509 07:31:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.509 07:31:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.509 07:31:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.509 07:31:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.509 07:31:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.509 07:31:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.509 07:31:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.509 07:31:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:30.509 07:31:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.509 07:31:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.509 07:31:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.509 07:31:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.509 07:31:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:30.509 07:31:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.509 07:31:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.509 07:31:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.509 07:31:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.509 07:31:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.509 07:31:34 -- paths/export.sh@5 -- # export PATH 00:12:30.509 07:31:34 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.509 07:31:34 -- nvmf/common.sh@46 -- # : 0 00:12:30.509 07:31:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.509 07:31:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.509 07:31:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.509 07:31:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.509 07:31:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.509 07:31:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.509 07:31:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.509 07:31:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.509 07:31:34 -- target/rpc.sh@11 -- # loops=5 00:12:30.509 07:31:34 -- target/rpc.sh@23 -- # nvmftestinit 00:12:30.509 07:31:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:30.509 07:31:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.509 07:31:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:30.509 07:31:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:30.509 07:31:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:30.509 07:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.509 07:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.509 07:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.509 07:31:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:30.509 07:31:34 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:30.509 07:31:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:30.509 07:31:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.783 07:31:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:35.783 07:31:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:35.783 07:31:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:35.783 07:31:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:35.783 07:31:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:35.783 07:31:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:35.783 07:31:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:35.783 07:31:39 -- nvmf/common.sh@294 -- # net_devs=() 00:12:35.783 07:31:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:35.783 07:31:39 -- nvmf/common.sh@295 -- # e810=() 00:12:35.783 07:31:39 -- nvmf/common.sh@295 -- # local -ga e810 00:12:35.783 07:31:39 -- nvmf/common.sh@296 -- # x722=() 00:12:35.783 07:31:39 -- nvmf/common.sh@296 -- # local -ga x722 00:12:35.783 07:31:39 -- nvmf/common.sh@297 -- # mlx=() 00:12:35.783 07:31:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:35.783 07:31:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:35.783 07:31:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.783 07:31:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:35.783 07:31:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.783 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.783 07:31:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:35.783 07:31:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.783 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.783 07:31:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:35.783 07:31:39 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:35.783 07:31:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.783 07:31:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.783 07:31:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.783 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.783 07:31:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:35.783 07:31:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.783 07:31:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.783 07:31:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.783 Found net devices under 0000:af:00.1: cvl_0_1 00:12:35.783 07:31:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:35.783 07:31:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:35.783 07:31:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.783 07:31:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.783 07:31:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:35.783 07:31:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.783 07:31:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.783 07:31:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:35.783 07:31:39 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.783 07:31:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.783 07:31:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:35.783 07:31:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:35.783 07:31:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.783 07:31:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.783 07:31:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.783 07:31:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.783 07:31:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:35.783 07:31:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.783 07:31:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.783 07:31:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.783 07:31:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:35.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:35.783 00:12:35.783 --- 10.0.0.2 ping statistics --- 00:12:35.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.783 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:35.783 07:31:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:35.783 00:12:35.783 --- 10.0.0.1 ping statistics --- 00:12:35.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.783 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:35.783 07:31:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.783 07:31:39 -- nvmf/common.sh@410 -- # return 0 00:12:35.783 07:31:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.783 07:31:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.783 07:31:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.783 07:31:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.783 07:31:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.783 07:31:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.783 07:31:39 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:35.783 07:31:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.783 07:31:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:35.783 07:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:35.783 07:31:39 -- nvmf/common.sh@469 -- # nvmfpid=4039272 00:12:35.783 07:31:39 -- nvmf/common.sh@470 -- # waitforlisten 4039272 00:12:35.783 07:31:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.783 07:31:39 -- common/autotest_common.sh@819 -- # '[' -z 4039272 ']' 00:12:35.783 07:31:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.783 07:31:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:35.783 07:31:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:35.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.783 07:31:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:35.783 07:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:35.783 [2024-10-07 07:31:39.643137] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:35.783 [2024-10-07 07:31:39.643179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.783 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.784 [2024-10-07 07:31:39.702199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.042 [2024-10-07 07:31:39.778465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.042 [2024-10-07 07:31:39.778575] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.042 [2024-10-07 07:31:39.778582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.042 [2024-10-07 07:31:39.778589] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:36.042 [2024-10-07 07:31:39.778630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.042 [2024-10-07 07:31:39.778749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.042 [2024-10-07 07:31:39.778837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.042 [2024-10-07 07:31:39.778838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.609 07:31:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:36.609 07:31:40 -- common/autotest_common.sh@852 -- # return 0 00:12:36.609 07:31:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:36.609 07:31:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:36.609 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 07:31:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.609 07:31:40 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:36.609 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.609 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.609 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.609 07:31:40 -- target/rpc.sh@26 -- # stats='{ 00:12:36.609 "tick_rate": 2100000000, 00:12:36.609 "poll_groups": [ 00:12:36.609 { 00:12:36.609 "name": "nvmf_tgt_poll_group_0", 00:12:36.609 "admin_qpairs": 0, 00:12:36.609 "io_qpairs": 0, 00:12:36.609 "current_admin_qpairs": 0, 00:12:36.609 "current_io_qpairs": 0, 00:12:36.609 "pending_bdev_io": 0, 00:12:36.609 "completed_nvme_io": 0, 00:12:36.609 "transports": [] 00:12:36.609 }, 00:12:36.609 { 00:12:36.609 "name": "nvmf_tgt_poll_group_1", 00:12:36.609 "admin_qpairs": 0, 00:12:36.609 "io_qpairs": 0, 00:12:36.609 "current_admin_qpairs": 0, 00:12:36.609 "current_io_qpairs": 0, 00:12:36.609 "pending_bdev_io": 0, 00:12:36.609 "completed_nvme_io": 0, 00:12:36.609 "transports": [] 00:12:36.609 }, 00:12:36.609 { 00:12:36.609 "name": 
"nvmf_tgt_poll_group_2", 00:12:36.609 "admin_qpairs": 0, 00:12:36.609 "io_qpairs": 0, 00:12:36.609 "current_admin_qpairs": 0, 00:12:36.609 "current_io_qpairs": 0, 00:12:36.609 "pending_bdev_io": 0, 00:12:36.609 "completed_nvme_io": 0, 00:12:36.609 "transports": [] 00:12:36.609 }, 00:12:36.609 { 00:12:36.609 "name": "nvmf_tgt_poll_group_3", 00:12:36.609 "admin_qpairs": 0, 00:12:36.609 "io_qpairs": 0, 00:12:36.609 "current_admin_qpairs": 0, 00:12:36.610 "current_io_qpairs": 0, 00:12:36.610 "pending_bdev_io": 0, 00:12:36.610 "completed_nvme_io": 0, 00:12:36.610 "transports": [] 00:12:36.610 } 00:12:36.610 ] 00:12:36.610 }' 00:12:36.610 07:31:40 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:36.610 07:31:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:36.610 07:31:40 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:36.610 07:31:40 -- target/rpc.sh@15 -- # wc -l 00:12:36.610 07:31:40 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:36.610 07:31:40 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:36.868 07:31:40 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:36.868 07:31:40 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.868 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.868 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 [2024-10-07 07:31:40.624776] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.868 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.868 07:31:40 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:36.868 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.868 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.868 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.868 07:31:40 -- target/rpc.sh@33 -- # stats='{ 00:12:36.868 "tick_rate": 2100000000, 00:12:36.868 "poll_groups": [ 00:12:36.868 { 00:12:36.868 "name": 
"nvmf_tgt_poll_group_0", 00:12:36.868 "admin_qpairs": 0, 00:12:36.868 "io_qpairs": 0, 00:12:36.868 "current_admin_qpairs": 0, 00:12:36.868 "current_io_qpairs": 0, 00:12:36.868 "pending_bdev_io": 0, 00:12:36.868 "completed_nvme_io": 0, 00:12:36.868 "transports": [ 00:12:36.868 { 00:12:36.868 "trtype": "TCP" 00:12:36.868 } 00:12:36.868 ] 00:12:36.868 }, 00:12:36.868 { 00:12:36.868 "name": "nvmf_tgt_poll_group_1", 00:12:36.868 "admin_qpairs": 0, 00:12:36.868 "io_qpairs": 0, 00:12:36.868 "current_admin_qpairs": 0, 00:12:36.868 "current_io_qpairs": 0, 00:12:36.868 "pending_bdev_io": 0, 00:12:36.868 "completed_nvme_io": 0, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }, 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_2", 00:12:36.869 "admin_qpairs": 0, 00:12:36.869 "io_qpairs": 0, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 00:12:36.869 "completed_nvme_io": 0, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }, 00:12:36.869 { 00:12:36.869 "name": "nvmf_tgt_poll_group_3", 00:12:36.869 "admin_qpairs": 0, 00:12:36.869 "io_qpairs": 0, 00:12:36.869 "current_admin_qpairs": 0, 00:12:36.869 "current_io_qpairs": 0, 00:12:36.869 "pending_bdev_io": 0, 00:12:36.869 "completed_nvme_io": 0, 00:12:36.869 "transports": [ 00:12:36.869 { 00:12:36.869 "trtype": "TCP" 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 } 00:12:36.869 ] 00:12:36.869 }' 00:12:36.869 07:31:40 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.869 07:31:40 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:36.869 07:31:40 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:36.869 07:31:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.869 07:31:40 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:36.869 07:31:40 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:36.869 07:31:40 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:36.869 07:31:40 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:36.869 07:31:40 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:36.869 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.869 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 Malloc1 00:12:36.869 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.869 07:31:40 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.869 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.869 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.869 07:31:40 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.869 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.869 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.869 07:31:40 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:36.869 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.869 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.869 07:31:40 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:36.869 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.869 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 [2024-10-07 07:31:40.792619] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.869 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.869 07:31:40 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.869 07:31:40 -- common/autotest_common.sh@640 -- # local es=0 00:12:36.869 07:31:40 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.869 07:31:40 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:36.869 07:31:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.869 07:31:40 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:36.869 07:31:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.869 07:31:40 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:36.869 07:31:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:36.869 07:31:40 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:36.869 07:31:40 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:36.869 07:31:40 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:36.869 [2024-10-07 07:31:40.821096] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:37.128 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.128 could not add new controller: failed to write to nvme-fabrics device 00:12:37.128 07:31:40 -- common/autotest_common.sh@643 -- # es=1 00:12:37.128 07:31:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:37.128 07:31:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:37.128 07:31:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:37.128 07:31:40 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:37.128 07:31:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.128 07:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:37.128 07:31:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.128 07:31:40 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.065 07:31:41 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.065 07:31:41 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.065 07:31:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.065 07:31:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.065 07:31:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.135 07:31:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.135 07:31:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.135 07:31:43 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:12:40.135 07:31:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.135 07:31:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.135 07:31:43 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.135 07:31:43 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.135 07:31:44 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.135 07:31:44 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.135 07:31:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.135 07:31:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.135 07:31:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.135 07:31:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.393 07:31:44 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.393 07:31:44 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:40.393 07:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.393 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:40.393 07:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.393 07:31:44 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.393 07:31:44 -- common/autotest_common.sh@640 -- # local es=0 00:12:40.393 07:31:44 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:40.393 07:31:44 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:40.393 07:31:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.393 07:31:44 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:40.393 07:31:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.393 07:31:44 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:40.393 07:31:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.393 07:31:44 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:40.393 07:31:44 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:40.393 07:31:44 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.393 [2024-10-07 07:31:44.146390] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:12:40.393 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.393 could not add new controller: failed to write to nvme-fabrics device 00:12:40.393 07:31:44 -- common/autotest_common.sh@643 -- # es=1 00:12:40.393 07:31:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:40.393 07:31:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:40.393 07:31:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:40.393 07:31:44 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:40.393 07:31:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.393 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:40.393 07:31:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.393 07:31:44 -- target/rpc.sh@73 -- # nvme connect 
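The two rejected connects above come from the target's per-subsystem access check (`nvmf_qpair_access_allowed`): a host is admitted only if the subsystem allows any host or its NQN is on the allow list. This is a hedged shell simulation of that decision logic, not SPDK code; `access_allowed`, `allow_any_host`, and `allowed_hosts` are illustrative names:

```shell
# Simulated access decision: admit the host if allow_any_host is set
# or the host NQN appears in the subsystem's allow list.
allow_any_host=0
allowed_hosts=""

access_allowed() {
    local host=$1 h
    [ "$allow_any_host" = 1 ] && return 0
    for h in $allowed_hosts; do
        [ "$h" = "$host" ] && return 0
    done
    return 1
}

host="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"
if access_allowed "$host"; then echo allowed; else echo denied; fi   # denied

allowed_hosts=$host   # what nvmf_subsystem_add_host effectively does
if access_allowed "$host"; then echo allowed; else echo denied; fi   # allowed
```

This mirrors the log's sequence: connect fails until either `nvmf_subsystem_add_host` whitelists the host NQN or `nvmf_subsystem_allow_any_host -e` disables the check.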
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.771 07:31:45 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.771 07:31:45 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.771 07:31:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.771 07:31:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.771 07:31:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.677 07:31:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.677 07:31:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.677 07:31:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.677 07:31:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.677 07:31:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.677 07:31:47 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.677 07:31:47 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.677 07:31:47 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.677 07:31:47 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.677 07:31:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.677 07:31:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.677 07:31:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.677 07:31:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.677 07:31:47 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.677 07:31:47 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.677 07:31:47 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:43.677 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.677 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.677 07:31:47 -- target/rpc.sh@81 -- # seq 1 5 00:12:43.677 07:31:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.677 07:31:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.677 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.678 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.678 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.678 07:31:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.678 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.678 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.678 [2024-10-07 07:31:47.506692] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.678 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.678 07:31:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.678 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.678 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.678 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.678 07:31:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.678 07:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.678 07:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:43.678 07:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.678 07:31:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:45.054 07:31:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.054 07:31:48 -- common/autotest_common.sh@1177 -- # local i=0 00:12:45.054 07:31:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.054 07:31:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:45.054 07:31:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:46.961 07:31:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:46.961 07:31:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:46.961 07:31:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.961 07:31:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:46.961 07:31:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.961 07:31:50 -- common/autotest_common.sh@1187 -- # return 0 00:12:46.961 07:31:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.961 07:31:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.961 07:31:50 -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.961 07:31:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:46.961 07:31:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.961 07:31:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.961 07:31:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.961 07:31:50 -- common/autotest_common.sh@1210 -- # return 0 00:12:46.961 07:31:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
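The `waitforserial` helper repeated throughout this section polls `lsblk` until a block device with the expected serial appears, giving up after roughly 15 attempts. A simplified sketch of that loop, with `list_devices` as an illustrative wrapper (the real helper greps `lsblk -l -o NAME,SERIAL` and compares a device count):

```shell
# Poll for a block device carrying the given serial, as the
# waitforserial helper in autotest_common.sh does; bounded retries
# with a 2-second sleep between attempts.
list_devices() { lsblk -l -o NAME,SERIAL 2>/dev/null || true; }

waitforserial() {
    local serial=$1 i=0
    while [ "$i" -le 15 ]; do
        i=$((i + 1))
        if list_devices | grep -q -w "$serial"; then
            return 0    # device showed up
        fi
        sleep 2
    done
    return 1            # timed out
}

# Usage (as in the log): waitforserial SPDKISFASTANDAWESOME
```

The companion `waitforserial_disconnect` inverts the condition, looping until the serial disappears from the device list.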
00:12:46.961 07:31:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.961 07:31:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.961 07:31:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.961 07:31:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 [2024-10-07 07:31:50.781692] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.961 07:31:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.961 07:31:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.961 07:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.961 07:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:46.961 07:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.961 07:31:50 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.341 07:31:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.341 07:31:51 -- common/autotest_common.sh@1177 -- # local i=0 00:12:48.341 07:31:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.341 07:31:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:48.341 07:31:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:50.247 07:31:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:50.247 07:31:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:50.247 07:31:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.247 07:31:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:50.247 07:31:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.247 07:31:53 -- common/autotest_common.sh@1187 -- # return 0 00:12:50.247 07:31:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.247 07:31:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.247 07:31:54 -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.247 07:31:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:50.247 07:31:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.247 07:31:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:50.247 07:31:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.247 07:31:54 -- common/autotest_common.sh@1210 -- # return 0 00:12:50.247 07:31:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.247 07:31:54 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:12:50.247 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.247 07:31:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.247 07:31:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.247 07:31:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.247 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.247 07:31:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.247 07:31:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.247 07:31:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.247 07:31:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.247 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.247 07:31:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.247 07:31:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.247 07:31:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.247 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.247 [2024-10-07 07:31:54.088924] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.248 07:31:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.248 07:31:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.248 07:31:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.248 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.248 07:31:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.248 07:31:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.248 07:31:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.248 07:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.248 07:31:54 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.248 07:31:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.626 07:31:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.626 07:31:55 -- common/autotest_common.sh@1177 -- # local i=0 00:12:51.626 07:31:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.626 07:31:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:51.626 07:31:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:53.531 07:31:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:53.532 07:31:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:53.532 07:31:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.532 07:31:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:53.532 07:31:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.532 07:31:57 -- common/autotest_common.sh@1187 -- # return 0 00:12:53.532 07:31:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.532 07:31:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.532 07:31:57 -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.532 07:31:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:53.532 07:31:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.532 07:31:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:53.532 07:31:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.532 07:31:57 -- common/autotest_common.sh@1210 -- # return 0 00:12:53.532 07:31:57 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.532 07:31:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 [2024-10-07 07:31:57.407952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.532 07:31:57 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:53.532 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 07:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.532 07:31:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.910 07:31:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.910 07:31:58 -- common/autotest_common.sh@1177 -- # local i=0 00:12:54.910 07:31:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.910 07:31:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:54.910 07:31:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:56.817 07:32:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:56.817 07:32:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:56.817 07:32:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.817 07:32:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:56.817 07:32:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.817 07:32:00 -- common/autotest_common.sh@1187 -- # return 0 00:12:56.817 07:32:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.817 07:32:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.817 07:32:00 -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.817 07:32:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:56.817 07:32:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.817 07:32:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:56.817 07:32:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.817 
07:32:00 -- common/autotest_common.sh@1210 -- # return 0 00:12:56.817 07:32:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.817 07:32:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 [2024-10-07 07:32:00.675347] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.817 07:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.817 07:32:00 -- common/autotest_common.sh@10 -- # set +x 00:12:56.817 07:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.817 07:32:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.196 07:32:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.196 07:32:01 -- common/autotest_common.sh@1177 -- # local i=0 00:12:58.196 07:32:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.196 07:32:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:58.196 07:32:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:00.099 07:32:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:00.099 07:32:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:00.099 07:32:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.099 07:32:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:00.099 07:32:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.099 07:32:03 -- common/autotest_common.sh@1187 -- # return 0 00:13:00.099 07:32:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.099 07:32:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.099 07:32:03 -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.099 07:32:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:00.099 07:32:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.099 07:32:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:13:00.099 07:32:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.099 07:32:03 -- common/autotest_common.sh@1210 -- # return 0 00:13:00.099 07:32:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.099 07:32:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.099 07:32:03 -- common/autotest_common.sh@10 -- # set +x 00:13:00.099 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.099 07:32:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.099 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.099 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.099 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.099 07:32:04 -- target/rpc.sh@99 -- # seq 1 5 00:13:00.099 07:32:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.099 07:32:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.099 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.099 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.099 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.099 07:32:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.099 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.100 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.100 [2024-10-07 07:32:04.033180] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.100 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.100 07:32:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.100 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.100 07:32:04 -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.100 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.100 07:32:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.100 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.100 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.100 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.100 07:32:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.100 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.100 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.100 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.100 07:32:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.100 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.100 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.100 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.100 07:32:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.358 07:32:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 [2024-10-07 07:32:04.081292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 
07:32:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.358 07:32:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 
[2024-10-07 07:32:04.129406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.358 07:32:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 [2024-10-07 07:32:04.177557] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.358 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.358 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.358 07:32:04 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.359 07:32:04 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 
-- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 [2024-10-07 07:32:04.225732] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.359 07:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:00.359 07:32:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:00.359 07:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 07:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:00.359 07:32:04 -- target/rpc.sh@110 -- # stats='{ 00:13:00.359 "tick_rate": 2100000000, 00:13:00.359 "poll_groups": [ 00:13:00.359 { 00:13:00.359 "name": "nvmf_tgt_poll_group_0", 00:13:00.359 "admin_qpairs": 2, 00:13:00.359 "io_qpairs": 168, 00:13:00.359 "current_admin_qpairs": 0, 00:13:00.359 "current_io_qpairs": 0, 00:13:00.359 "pending_bdev_io": 0, 00:13:00.359 "completed_nvme_io": 267, 00:13:00.359 "transports": [ 00:13:00.359 { 00:13:00.359 "trtype": "TCP" 00:13:00.359 } 00:13:00.359 ] 00:13:00.359 }, 00:13:00.359 { 00:13:00.359 "name": "nvmf_tgt_poll_group_1", 00:13:00.359 "admin_qpairs": 2, 00:13:00.359 "io_qpairs": 168, 00:13:00.359 "current_admin_qpairs": 0, 00:13:00.359 "current_io_qpairs": 0, 00:13:00.359 "pending_bdev_io": 0, 00:13:00.359 "completed_nvme_io": 219, 00:13:00.359 "transports": [ 00:13:00.359 { 00:13:00.359 "trtype": "TCP" 00:13:00.359 } 00:13:00.359 ] 00:13:00.359 }, 00:13:00.359 { 00:13:00.359 "name": "nvmf_tgt_poll_group_2", 00:13:00.359 "admin_qpairs": 1, 00:13:00.359 "io_qpairs": 168, 00:13:00.359 "current_admin_qpairs": 0, 00:13:00.359 "current_io_qpairs": 0, 00:13:00.359 "pending_bdev_io": 0, 00:13:00.359 "completed_nvme_io": 219, 00:13:00.359 "transports": [ 00:13:00.359 { 00:13:00.359 "trtype": "TCP" 00:13:00.359 } 00:13:00.359 ] 00:13:00.359 }, 00:13:00.359 { 00:13:00.359 "name": "nvmf_tgt_poll_group_3", 00:13:00.359 "admin_qpairs": 2, 00:13:00.359 "io_qpairs": 168, 00:13:00.359 "current_admin_qpairs": 0, 00:13:00.359 "current_io_qpairs": 0, 00:13:00.359 "pending_bdev_io": 0, 00:13:00.359 "completed_nvme_io": 317, 00:13:00.359 "transports": [ 00:13:00.359 { 00:13:00.359 "trtype": "TCP" 00:13:00.359 } 00:13:00.359 ] 00:13:00.359 } 00:13:00.359 ] 00:13:00.359 }' 00:13:00.359 07:32:04 -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:13:00.359 07:32:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.359 07:32:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.359 07:32:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.359 07:32:04 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:00.618 07:32:04 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.618 07:32:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.618 07:32:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.618 07:32:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.618 07:32:04 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:00.618 07:32:04 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:00.618 07:32:04 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:00.618 07:32:04 -- target/rpc.sh@123 -- # nvmftestfini 00:13:00.618 07:32:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:00.618 07:32:04 -- nvmf/common.sh@116 -- # sync 00:13:00.618 07:32:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:00.618 07:32:04 -- nvmf/common.sh@119 -- # set +e 00:13:00.618 07:32:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:00.618 07:32:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:00.618 rmmod nvme_tcp 00:13:00.618 rmmod nvme_fabrics 00:13:00.618 rmmod nvme_keyring 00:13:00.618 07:32:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:00.618 07:32:04 -- nvmf/common.sh@123 -- # set -e 00:13:00.618 07:32:04 -- nvmf/common.sh@124 -- # return 0 00:13:00.618 07:32:04 -- nvmf/common.sh@477 -- # '[' -n 4039272 ']' 00:13:00.618 07:32:04 -- nvmf/common.sh@478 -- # killprocess 4039272 00:13:00.618 07:32:04 -- common/autotest_common.sh@926 -- # '[' -z 4039272 ']' 00:13:00.618 07:32:04 -- common/autotest_common.sh@930 -- # kill -0 4039272 00:13:00.618 07:32:04 -- common/autotest_common.sh@931 -- # uname 00:13:00.618 07:32:04 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:13:00.618 07:32:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4039272 00:13:00.618 07:32:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:00.618 07:32:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:00.618 07:32:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4039272' 00:13:00.618 killing process with pid 4039272 00:13:00.618 07:32:04 -- common/autotest_common.sh@945 -- # kill 4039272 00:13:00.618 07:32:04 -- common/autotest_common.sh@950 -- # wait 4039272 00:13:00.877 07:32:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:00.877 07:32:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:00.877 07:32:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:00.877 07:32:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.877 07:32:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:00.877 07:32:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.877 07:32:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.877 07:32:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.411 07:32:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:03.411 00:13:03.411 real 0m32.742s 00:13:03.412 user 1m41.016s 00:13:03.412 sys 0m5.942s 00:13:03.412 07:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.412 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:13:03.412 ************************************ 00:13:03.412 END TEST nvmf_rpc 00:13:03.412 ************************************ 00:13:03.412 07:32:06 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.412 07:32:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:03.412 07:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.412 07:32:06 -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.412 ************************************ 00:13:03.412 START TEST nvmf_invalid 00:13:03.412 ************************************ 00:13:03.412 07:32:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.412 * Looking for test storage... 00:13:03.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.412 07:32:06 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.412 07:32:06 -- nvmf/common.sh@7 -- # uname -s 00:13:03.412 07:32:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.412 07:32:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.412 07:32:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.412 07:32:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.412 07:32:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.412 07:32:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.412 07:32:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.412 07:32:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.412 07:32:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.412 07:32:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.412 07:32:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:03.412 07:32:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:03.412 07:32:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.412 07:32:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.412 07:32:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.412 07:32:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.412 07:32:06 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.412 07:32:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.412 07:32:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.412 07:32:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.412 07:32:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.412 07:32:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.412 07:32:06 -- 
paths/export.sh@5 -- # export PATH 00:13:03.412 07:32:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.412 07:32:06 -- nvmf/common.sh@46 -- # : 0 00:13:03.412 07:32:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:03.412 07:32:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:03.412 07:32:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:03.412 07:32:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.412 07:32:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.412 07:32:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:03.412 07:32:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:03.412 07:32:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:03.412 07:32:06 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.412 07:32:06 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.412 07:32:06 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:03.412 07:32:06 -- target/invalid.sh@14 -- # target=foobar 00:13:03.412 07:32:06 -- target/invalid.sh@16 -- # RANDOM=0 00:13:03.412 07:32:06 -- target/invalid.sh@34 -- # nvmftestinit 00:13:03.412 07:32:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:03.412 07:32:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.412 07:32:06 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:13:03.412 07:32:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:03.412 07:32:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:03.412 07:32:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.412 07:32:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.412 07:32:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.412 07:32:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:03.412 07:32:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:03.412 07:32:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:03.412 07:32:06 -- common/autotest_common.sh@10 -- # set +x 00:13:08.684 07:32:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:08.684 07:32:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:08.684 07:32:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:08.684 07:32:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:08.684 07:32:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:08.684 07:32:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:08.684 07:32:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:08.684 07:32:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:08.684 07:32:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:08.684 07:32:12 -- nvmf/common.sh@295 -- # e810=() 00:13:08.684 07:32:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:08.684 07:32:12 -- nvmf/common.sh@296 -- # x722=() 00:13:08.684 07:32:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:08.684 07:32:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:08.684 07:32:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:08.684 07:32:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.684 07:32:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:08.684 07:32:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:08.684 07:32:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:08.684 07:32:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.684 07:32:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:08.684 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:08.684 07:32:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.684 07:32:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:08.684 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:08.684 07:32:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.684 
07:32:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:08.684 07:32:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.684 07:32:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.684 07:32:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.684 07:32:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.684 07:32:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:08.684 Found net devices under 0000:af:00.0: cvl_0_0 00:13:08.684 07:32:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.684 07:32:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.684 07:32:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.684 07:32:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.684 07:32:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.684 07:32:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:08.684 Found net devices under 0000:af:00.1: cvl_0_1 00:13:08.684 07:32:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.684 07:32:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:08.684 07:32:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:08.684 07:32:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:08.684 07:32:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:08.685 07:32:12 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:08.685 07:32:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.685 07:32:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.685 07:32:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:08.685 07:32:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.685 07:32:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.685 07:32:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:08.685 07:32:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.685 07:32:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.685 07:32:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:08.685 07:32:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:08.685 07:32:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.685 07:32:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.685 07:32:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.685 07:32:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.685 07:32:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:08.685 07:32:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.685 07:32:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.685 07:32:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.685 07:32:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:08.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:13:08.685 00:13:08.685 --- 10.0.0.2 ping statistics --- 00:13:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.685 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:08.685 07:32:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:13:08.685 00:13:08.685 --- 10.0.0.1 ping statistics --- 00:13:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.685 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:08.685 07:32:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.685 07:32:12 -- nvmf/common.sh@410 -- # return 0 00:13:08.685 07:32:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:08.685 07:32:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.685 07:32:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:08.685 07:32:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:08.685 07:32:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.685 07:32:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:08.685 07:32:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:08.685 07:32:12 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:08.685 07:32:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:08.685 07:32:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:08.685 07:32:12 -- common/autotest_common.sh@10 -- # set +x 00:13:08.685 07:32:12 -- nvmf/common.sh@469 -- # nvmfpid=4046944 00:13:08.685 07:32:12 -- nvmf/common.sh@470 -- # waitforlisten 4046944 00:13:08.685 07:32:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.685 07:32:12 -- common/autotest_common.sh@819 
-- # '[' -z 4046944 ']' 00:13:08.685 07:32:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.685 07:32:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:08.685 07:32:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.685 07:32:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:08.685 07:32:12 -- common/autotest_common.sh@10 -- # set +x 00:13:08.685 [2024-10-07 07:32:12.414523] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:08.685 [2024-10-07 07:32:12.414566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.685 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.685 [2024-10-07 07:32:12.473318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.685 [2024-10-07 07:32:12.549803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:08.685 [2024-10-07 07:32:12.549911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.685 [2024-10-07 07:32:12.549919] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.685 [2024-10-07 07:32:12.549925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.685 [2024-10-07 07:32:12.549967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.685 [2024-10-07 07:32:12.550071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.685 [2024-10-07 07:32:12.550125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.685 [2024-10-07 07:32:12.550127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.621 07:32:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:09.621 07:32:13 -- common/autotest_common.sh@852 -- # return 0 00:13:09.621 07:32:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:09.621 07:32:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:09.621 07:32:13 -- common/autotest_common.sh@10 -- # set +x 00:13:09.621 07:32:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.621 07:32:13 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.621 07:32:13 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9008 00:13:09.621 [2024-10-07 07:32:13.426839] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:09.621 07:32:13 -- target/invalid.sh@40 -- # out='request: 00:13:09.621 { 00:13:09.621 "nqn": "nqn.2016-06.io.spdk:cnode9008", 00:13:09.621 "tgt_name": "foobar", 00:13:09.621 "method": "nvmf_create_subsystem", 00:13:09.621 "req_id": 1 00:13:09.621 } 00:13:09.621 Got JSON-RPC error response 00:13:09.621 response: 00:13:09.621 { 00:13:09.621 "code": -32603, 00:13:09.621 "message": "Unable to find target foobar" 00:13:09.621 }' 00:13:09.621 07:32:13 -- target/invalid.sh@41 -- # [[ request: 00:13:09.621 { 00:13:09.621 "nqn": "nqn.2016-06.io.spdk:cnode9008", 00:13:09.621 "tgt_name": "foobar", 00:13:09.621 "method": "nvmf_create_subsystem", 
00:13:09.621 "req_id": 1 00:13:09.621 } 00:13:09.621 Got JSON-RPC error response 00:13:09.621 response: 00:13:09.621 { 00:13:09.622 "code": -32603, 00:13:09.622 "message": "Unable to find target foobar" 00:13:09.622 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:09.622 07:32:13 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:09.622 07:32:13 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14367 00:13:09.881 [2024-10-07 07:32:13.619526] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14367: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:09.881 07:32:13 -- target/invalid.sh@45 -- # out='request: 00:13:09.881 { 00:13:09.881 "nqn": "nqn.2016-06.io.spdk:cnode14367", 00:13:09.881 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.881 "method": "nvmf_create_subsystem", 00:13:09.881 "req_id": 1 00:13:09.881 } 00:13:09.881 Got JSON-RPC error response 00:13:09.881 response: 00:13:09.881 { 00:13:09.881 "code": -32602, 00:13:09.881 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.881 }' 00:13:09.881 07:32:13 -- target/invalid.sh@46 -- # [[ request: 00:13:09.881 { 00:13:09.881 "nqn": "nqn.2016-06.io.spdk:cnode14367", 00:13:09.881 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.881 "method": "nvmf_create_subsystem", 00:13:09.881 "req_id": 1 00:13:09.881 } 00:13:09.881 Got JSON-RPC error response 00:13:09.881 response: 00:13:09.881 { 00:13:09.881 "code": -32602, 00:13:09.881 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.881 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.881 07:32:13 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:09.881 07:32:13 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12998 00:13:09.881 [2024-10-07 07:32:13.812147] nvmf_rpc.c: 
427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12998: invalid model number 'SPDK_Controller' 00:13:09.881 07:32:13 -- target/invalid.sh@50 -- # out='request: 00:13:09.881 { 00:13:09.881 "nqn": "nqn.2016-06.io.spdk:cnode12998", 00:13:09.881 "model_number": "SPDK_Controller\u001f", 00:13:09.881 "method": "nvmf_create_subsystem", 00:13:09.881 "req_id": 1 00:13:09.881 } 00:13:09.881 Got JSON-RPC error response 00:13:09.881 response: 00:13:09.881 { 00:13:09.881 "code": -32602, 00:13:09.881 "message": "Invalid MN SPDK_Controller\u001f" 00:13:09.881 }' 00:13:09.881 07:32:13 -- target/invalid.sh@51 -- # [[ request: 00:13:09.881 { 00:13:09.881 "nqn": "nqn.2016-06.io.spdk:cnode12998", 00:13:09.881 "model_number": "SPDK_Controller\u001f", 00:13:09.881 "method": "nvmf_create_subsystem", 00:13:09.881 "req_id": 1 00:13:09.881 } 00:13:09.881 Got JSON-RPC error response 00:13:09.881 response: 00:13:09.881 { 00:13:09.881 "code": -32602, 00:13:09.881 "message": "Invalid MN SPDK_Controller\u001f" 00:13:09.881 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.881 07:32:13 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:09.881 07:32:13 -- target/invalid.sh@19 -- # local length=21 ll 00:13:09.881 07:32:13 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:09.881 07:32:13 -- target/invalid.sh@21 -- # local chars 00:13:09.881 07:32:13 -- target/invalid.sh@22 -- # local string 00:13:09.881 07:32:13 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:09.881 07:32:13 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 101 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=e 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 111 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=o 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 105 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=i 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 40 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='(' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 123 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='{' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 42 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='*' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 
07:32:13 -- target/invalid.sh@25 -- # printf %x 96 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='`' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 115 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=s 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 33 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='!' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 63 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='?' 
00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 35 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+='#' 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 52 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=4 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 50 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=2 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 43 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=+ 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 97 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # string+=a 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.140 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # printf %x 125 00:13:10.140 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+='}' 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # printf %x 75 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+=K 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # printf %x 49 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+=1 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # printf %x 99 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+=c 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # printf %x 94 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+='^' 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # printf %x 33 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:10.141 07:32:13 -- target/invalid.sh@25 -- # string+='!' 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.141 07:32:13 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.141 07:32:13 -- target/invalid.sh@28 -- # [[ e == \- ]] 00:13:10.141 07:32:13 -- target/invalid.sh@31 -- # echo 'eoi({*`s!?#42+a}K1c^!' 00:13:10.141 07:32:13 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'eoi({*`s!?#42+a}K1c^!' 
nqn.2016-06.io.spdk:cnode21179 00:13:10.400 [2024-10-07 07:32:14.141252] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21179: invalid serial number 'eoi({*`s!?#42+a}K1c^!' 00:13:10.400 07:32:14 -- target/invalid.sh@54 -- # out='request: 00:13:10.400 { 00:13:10.400 "nqn": "nqn.2016-06.io.spdk:cnode21179", 00:13:10.400 "serial_number": "eoi({*`s!?#42+a}K1c^!", 00:13:10.400 "method": "nvmf_create_subsystem", 00:13:10.400 "req_id": 1 00:13:10.400 } 00:13:10.400 Got JSON-RPC error response 00:13:10.400 response: 00:13:10.400 { 00:13:10.400 "code": -32602, 00:13:10.400 "message": "Invalid SN eoi({*`s!?#42+a}K1c^!" 00:13:10.400 }' 00:13:10.400 07:32:14 -- target/invalid.sh@55 -- # [[ request: 00:13:10.400 { 00:13:10.400 "nqn": "nqn.2016-06.io.spdk:cnode21179", 00:13:10.400 "serial_number": "eoi({*`s!?#42+a}K1c^!", 00:13:10.400 "method": "nvmf_create_subsystem", 00:13:10.400 "req_id": 1 00:13:10.400 } 00:13:10.400 Got JSON-RPC error response 00:13:10.400 response: 00:13:10.400 { 00:13:10.400 "code": -32602, 00:13:10.400 "message": "Invalid SN eoi({*`s!?#42+a}K1c^!" 
00:13:10.400 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.400 07:32:14 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:10.400 07:32:14 -- target/invalid.sh@19 -- # local length=41 ll 00:13:10.400 07:32:14 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.400 07:32:14 -- target/invalid.sh@21 -- # local chars 00:13:10.400 07:32:14 -- target/invalid.sh@22 -- # local string 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 117 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=u 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 114 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=r 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 77 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=M 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 
-- target/invalid.sh@25 -- # printf %x 84 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=T 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 112 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=p 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 89 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=Y 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 89 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=Y 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 122 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=z 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 81 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=Q 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 107 00:13:10.400 07:32:14 
-- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=k 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 38 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='&' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 85 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=U 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 62 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='>' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 66 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=B 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 63 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='?' 
00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 95 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=_ 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 45 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=- 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 93 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=']' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='\' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 36 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='$' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='\' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 61 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+== 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 120 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=x 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 119 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=w 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 41 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=')' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 83 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=S 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 110 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=n 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 42 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='*' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+='\' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 98 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=b 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # printf %x 39 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.400 07:32:14 -- target/invalid.sh@25 -- # string+=\' 00:13:10.400 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 95 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=_ 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 78 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=N 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # 
printf %x 47 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=/ 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 124 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+='|' 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 49 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=1 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 81 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=Q 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 76 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=L 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 38 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+='&' 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 42 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- 
# echo -e '\x2a' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+='*' 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # printf %x 58 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:10.659 07:32:14 -- target/invalid.sh@25 -- # string+=: 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.659 07:32:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.659 07:32:14 -- target/invalid.sh@28 -- # [[ u == \- ]] 00:13:10.659 07:32:14 -- target/invalid.sh@31 -- # echo 'urMTpYYzQk&U>B?_-]\$\=xw)Sn*\b'\''_N/|1QL&*:' 00:13:10.659 07:32:14 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'urMTpYYzQk&U>B?_-]\$\=xw)Sn*\b'\''_N/|1QL&*:' nqn.2016-06.io.spdk:cnode765 00:13:10.659 [2024-10-07 07:32:14.594767] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode765: invalid model number 'urMTpYYzQk&U>B?_-]\$\=xw)Sn*\b'_N/|1QL&*:' 00:13:10.659 07:32:14 -- target/invalid.sh@58 -- # out='request: 00:13:10.659 { 00:13:10.659 "nqn": "nqn.2016-06.io.spdk:cnode765", 00:13:10.659 "model_number": "urMTpYYzQk&U>B?_-]\\$\\=xw)Sn*\\b'\''_N/|1QL&*:", 00:13:10.659 "method": "nvmf_create_subsystem", 00:13:10.659 "req_id": 1 00:13:10.659 } 00:13:10.659 Got JSON-RPC error response 00:13:10.659 response: 00:13:10.659 { 00:13:10.660 "code": -32602, 00:13:10.660 "message": "Invalid MN urMTpYYzQk&U>B?_-]\\$\\=xw)Sn*\\b'\''_N/|1QL&*:" 00:13:10.660 }' 00:13:10.660 07:32:14 -- target/invalid.sh@59 -- # [[ request: 00:13:10.660 { 00:13:10.660 "nqn": "nqn.2016-06.io.spdk:cnode765", 00:13:10.660 "model_number": "urMTpYYzQk&U>B?_-]\\$\\=xw)Sn*\\b'_N/|1QL&*:", 00:13:10.660 "method": "nvmf_create_subsystem", 00:13:10.660 "req_id": 1 00:13:10.660 } 00:13:10.660 Got JSON-RPC error response 00:13:10.660 response: 00:13:10.660 { 
00:13:10.660 "code": -32602, 00:13:10.660 "message": "Invalid MN urMTpYYzQk&U>B?_-]\\$\\=xw)Sn*\\b'_N/|1QL&*:" 00:13:10.660 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.660 07:32:14 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:10.918 [2024-10-07 07:32:14.787439] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.918 07:32:14 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:11.176 07:32:15 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:11.176 07:32:15 -- target/invalid.sh@67 -- # echo '' 00:13:11.176 07:32:15 -- target/invalid.sh@67 -- # head -n 1 00:13:11.176 07:32:15 -- target/invalid.sh@67 -- # IP= 00:13:11.176 07:32:15 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:11.435 [2024-10-07 07:32:15.178175] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:11.435 07:32:15 -- target/invalid.sh@69 -- # out='request: 00:13:11.435 { 00:13:11.435 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.435 "listen_address": { 00:13:11.435 "trtype": "tcp", 00:13:11.435 "traddr": "", 00:13:11.435 "trsvcid": "4421" 00:13:11.435 }, 00:13:11.435 "method": "nvmf_subsystem_remove_listener", 00:13:11.435 "req_id": 1 00:13:11.435 } 00:13:11.435 Got JSON-RPC error response 00:13:11.435 response: 00:13:11.435 { 00:13:11.435 "code": -32602, 00:13:11.435 "message": "Invalid parameters" 00:13:11.435 }' 00:13:11.435 07:32:15 -- target/invalid.sh@70 -- # [[ request: 00:13:11.435 { 00:13:11.435 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.435 "listen_address": { 00:13:11.435 "trtype": "tcp", 00:13:11.435 "traddr": "", 00:13:11.435 "trsvcid": "4421" 00:13:11.435 }, 00:13:11.435 "method": 
"nvmf_subsystem_remove_listener", 00:13:11.435 "req_id": 1 00:13:11.435 } 00:13:11.435 Got JSON-RPC error response 00:13:11.435 response: 00:13:11.435 { 00:13:11.435 "code": -32602, 00:13:11.435 "message": "Invalid parameters" 00:13:11.435 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:11.435 07:32:15 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20435 -i 0 00:13:11.435 [2024-10-07 07:32:15.358747] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20435: invalid cntlid range [0-65519] 00:13:11.435 07:32:15 -- target/invalid.sh@73 -- # out='request: 00:13:11.435 { 00:13:11.435 "nqn": "nqn.2016-06.io.spdk:cnode20435", 00:13:11.435 "min_cntlid": 0, 00:13:11.435 "method": "nvmf_create_subsystem", 00:13:11.435 "req_id": 1 00:13:11.435 } 00:13:11.435 Got JSON-RPC error response 00:13:11.435 response: 00:13:11.435 { 00:13:11.435 "code": -32602, 00:13:11.435 "message": "Invalid cntlid range [0-65519]" 00:13:11.435 }' 00:13:11.435 07:32:15 -- target/invalid.sh@74 -- # [[ request: 00:13:11.435 { 00:13:11.435 "nqn": "nqn.2016-06.io.spdk:cnode20435", 00:13:11.435 "min_cntlid": 0, 00:13:11.435 "method": "nvmf_create_subsystem", 00:13:11.435 "req_id": 1 00:13:11.435 } 00:13:11.435 Got JSON-RPC error response 00:13:11.435 response: 00:13:11.435 { 00:13:11.435 "code": -32602, 00:13:11.435 "message": "Invalid cntlid range [0-65519]" 00:13:11.435 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.435 07:32:15 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20942 -i 65520 00:13:11.693 [2024-10-07 07:32:15.551384] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20942: invalid cntlid range [65520-65519] 00:13:11.693 07:32:15 -- target/invalid.sh@75 -- # out='request: 00:13:11.693 { 00:13:11.693 
"nqn": "nqn.2016-06.io.spdk:cnode20942", 00:13:11.693 "min_cntlid": 65520, 00:13:11.693 "method": "nvmf_create_subsystem", 00:13:11.693 "req_id": 1 00:13:11.693 } 00:13:11.693 Got JSON-RPC error response 00:13:11.693 response: 00:13:11.693 { 00:13:11.693 "code": -32602, 00:13:11.693 "message": "Invalid cntlid range [65520-65519]" 00:13:11.693 }' 00:13:11.693 07:32:15 -- target/invalid.sh@76 -- # [[ request: 00:13:11.693 { 00:13:11.693 "nqn": "nqn.2016-06.io.spdk:cnode20942", 00:13:11.693 "min_cntlid": 65520, 00:13:11.693 "method": "nvmf_create_subsystem", 00:13:11.693 "req_id": 1 00:13:11.693 } 00:13:11.693 Got JSON-RPC error response 00:13:11.693 response: 00:13:11.693 { 00:13:11.693 "code": -32602, 00:13:11.693 "message": "Invalid cntlid range [65520-65519]" 00:13:11.693 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.693 07:32:15 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9290 -I 0 00:13:11.953 [2024-10-07 07:32:15.756149] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9290: invalid cntlid range [1-0] 00:13:11.953 07:32:15 -- target/invalid.sh@77 -- # out='request: 00:13:11.953 { 00:13:11.953 "nqn": "nqn.2016-06.io.spdk:cnode9290", 00:13:11.953 "max_cntlid": 0, 00:13:11.953 "method": "nvmf_create_subsystem", 00:13:11.953 "req_id": 1 00:13:11.953 } 00:13:11.953 Got JSON-RPC error response 00:13:11.953 response: 00:13:11.953 { 00:13:11.953 "code": -32602, 00:13:11.953 "message": "Invalid cntlid range [1-0]" 00:13:11.953 }' 00:13:11.953 07:32:15 -- target/invalid.sh@78 -- # [[ request: 00:13:11.953 { 00:13:11.953 "nqn": "nqn.2016-06.io.spdk:cnode9290", 00:13:11.953 "max_cntlid": 0, 00:13:11.953 "method": "nvmf_create_subsystem", 00:13:11.953 "req_id": 1 00:13:11.953 } 00:13:11.953 Got JSON-RPC error response 00:13:11.953 response: 00:13:11.953 { 00:13:11.953 "code": -32602, 00:13:11.953 "message": "Invalid cntlid 
range [1-0]" 00:13:11.953 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.953 07:32:15 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16300 -I 65520 00:13:12.211 [2024-10-07 07:32:15.948792] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16300: invalid cntlid range [1-65520] 00:13:12.211 07:32:15 -- target/invalid.sh@79 -- # out='request: 00:13:12.211 { 00:13:12.211 "nqn": "nqn.2016-06.io.spdk:cnode16300", 00:13:12.211 "max_cntlid": 65520, 00:13:12.211 "method": "nvmf_create_subsystem", 00:13:12.211 "req_id": 1 00:13:12.211 } 00:13:12.211 Got JSON-RPC error response 00:13:12.211 response: 00:13:12.211 { 00:13:12.211 "code": -32602, 00:13:12.211 "message": "Invalid cntlid range [1-65520]" 00:13:12.211 }' 00:13:12.211 07:32:15 -- target/invalid.sh@80 -- # [[ request: 00:13:12.211 { 00:13:12.212 "nqn": "nqn.2016-06.io.spdk:cnode16300", 00:13:12.212 "max_cntlid": 65520, 00:13:12.212 "method": "nvmf_create_subsystem", 00:13:12.212 "req_id": 1 00:13:12.212 } 00:13:12.212 Got JSON-RPC error response 00:13:12.212 response: 00:13:12.212 { 00:13:12.212 "code": -32602, 00:13:12.212 "message": "Invalid cntlid range [1-65520]" 00:13:12.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.212 07:32:15 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26484 -i 6 -I 5 00:13:12.212 [2024-10-07 07:32:16.137463] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26484: invalid cntlid range [6-5] 00:13:12.212 07:32:16 -- target/invalid.sh@83 -- # out='request: 00:13:12.212 { 00:13:12.212 "nqn": "nqn.2016-06.io.spdk:cnode26484", 00:13:12.212 "min_cntlid": 6, 00:13:12.212 "max_cntlid": 5, 00:13:12.212 "method": "nvmf_create_subsystem", 00:13:12.212 "req_id": 1 00:13:12.212 } 00:13:12.212 Got JSON-RPC error 
response 00:13:12.212 response: 00:13:12.212 { 00:13:12.212 "code": -32602, 00:13:12.212 "message": "Invalid cntlid range [6-5]" 00:13:12.212 }' 00:13:12.212 07:32:16 -- target/invalid.sh@84 -- # [[ request: 00:13:12.212 { 00:13:12.212 "nqn": "nqn.2016-06.io.spdk:cnode26484", 00:13:12.212 "min_cntlid": 6, 00:13:12.212 "max_cntlid": 5, 00:13:12.212 "method": "nvmf_create_subsystem", 00:13:12.212 "req_id": 1 00:13:12.212 } 00:13:12.212 Got JSON-RPC error response 00:13:12.212 response: 00:13:12.212 { 00:13:12.212 "code": -32602, 00:13:12.212 "message": "Invalid cntlid range [6-5]" 00:13:12.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.212 07:32:16 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:12.471 07:32:16 -- target/invalid.sh@87 -- # out='request: 00:13:12.471 { 00:13:12.471 "name": "foobar", 00:13:12.471 "method": "nvmf_delete_target", 00:13:12.471 "req_id": 1 00:13:12.471 } 00:13:12.471 Got JSON-RPC error response 00:13:12.471 response: 00:13:12.471 { 00:13:12.471 "code": -32602, 00:13:12.471 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:12.471 }' 00:13:12.471 07:32:16 -- target/invalid.sh@88 -- # [[ request: 00:13:12.471 { 00:13:12.471 "name": "foobar", 00:13:12.471 "method": "nvmf_delete_target", 00:13:12.471 "req_id": 1 00:13:12.471 } 00:13:12.471 Got JSON-RPC error response 00:13:12.471 response: 00:13:12.471 { 00:13:12.471 "code": -32602, 00:13:12.471 "message": "The specified target doesn't exist, cannot delete it." 
00:13:12.471 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:12.471 07:32:16 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:12.471 07:32:16 -- target/invalid.sh@91 -- # nvmftestfini 00:13:12.471 07:32:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:12.471 07:32:16 -- nvmf/common.sh@116 -- # sync 00:13:12.471 07:32:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:12.471 07:32:16 -- nvmf/common.sh@119 -- # set +e 00:13:12.471 07:32:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:12.471 07:32:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:12.471 rmmod nvme_tcp 00:13:12.471 rmmod nvme_fabrics 00:13:12.471 rmmod nvme_keyring 00:13:12.471 07:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:12.471 07:32:16 -- nvmf/common.sh@123 -- # set -e 00:13:12.471 07:32:16 -- nvmf/common.sh@124 -- # return 0 00:13:12.471 07:32:16 -- nvmf/common.sh@477 -- # '[' -n 4046944 ']' 00:13:12.471 07:32:16 -- nvmf/common.sh@478 -- # killprocess 4046944 00:13:12.471 07:32:16 -- common/autotest_common.sh@926 -- # '[' -z 4046944 ']' 00:13:12.471 07:32:16 -- common/autotest_common.sh@930 -- # kill -0 4046944 00:13:12.471 07:32:16 -- common/autotest_common.sh@931 -- # uname 00:13:12.471 07:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:12.471 07:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4046944 00:13:12.471 07:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:12.471 07:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:12.471 07:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4046944' 00:13:12.471 killing process with pid 4046944 00:13:12.471 07:32:16 -- common/autotest_common.sh@945 -- # kill 4046944 00:13:12.471 07:32:16 -- common/autotest_common.sh@950 -- # wait 4046944 00:13:12.730 07:32:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
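The invalid.sh checks traced above reject five cntlid ranges in turn: [0-65519], [65520-65519], [1-0], [1-65520], and [6-5]. All five follow from one pair of bounds. Below is a minimal Python sketch of that validation, with the 1..65519 limits inferred from the "[0-65519]" and "[1-65520]" error strings in the log (the function name `validate_cntlid_range` is hypothetical, not SPDK's):

```python
# Hypothetical re-implementation of the cntlid range check exercised by
# invalid.sh above. Limits are inferred from the error messages in the
# trace ("Invalid cntlid range [0-65519]", "[1-65520]"), not taken from
# SPDK source.
def validate_cntlid_range(min_cntlid: int = 1, max_cntlid: int = 0xFFEF) -> bool:
    """Return True when the [min_cntlid, max_cntlid] range is acceptable."""
    if min_cntlid < 1 or max_cntlid > 0xFFEF:  # 0xFFEF == 65519
        return False
    return min_cntlid <= max_cntlid

# The five failing RPC calls from the trace, as (min, max) pairs:
for lo, hi in [(0, 65519), (65520, 65519), (1, 0), (1, 65520), (6, 5)]:
    assert not validate_cntlid_range(lo, hi)
# The full legal range is accepted.
assert validate_cntlid_range(1, 65519)
```

Each rejected pair maps directly onto one `rpc.py nvmf_create_subsystem` invocation above (`-i` sets min_cntlid, `-I` sets max_cntlid), and the JSON-RPC layer reports the failure as code -32602.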
00:13:12.730 07:32:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:12.730 07:32:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:12.730 07:32:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.730 07:32:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:12.730 07:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.730 07:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.730 07:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.266 07:32:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:15.266 00:13:15.266 real 0m11.839s 00:13:15.266 user 0m20.108s 00:13:15.266 sys 0m4.918s 00:13:15.266 07:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.266 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:13:15.266 ************************************ 00:13:15.266 END TEST nvmf_invalid 00:13:15.266 ************************************ 00:13:15.266 07:32:18 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.266 07:32:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.266 07:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.266 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:13:15.266 ************************************ 00:13:15.266 START TEST nvmf_abort 00:13:15.266 ************************************ 00:13:15.266 07:32:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.266 * Looking for test storage... 
00:13:15.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.266 07:32:18 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.266 07:32:18 -- nvmf/common.sh@7 -- # uname -s 00:13:15.266 07:32:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.266 07:32:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.266 07:32:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.266 07:32:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.266 07:32:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.266 07:32:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.266 07:32:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.266 07:32:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.266 07:32:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.266 07:32:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.266 07:32:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:15.266 07:32:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:15.266 07:32:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.266 07:32:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.266 07:32:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.266 07:32:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.266 07:32:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.266 07:32:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.266 07:32:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.266 07:32:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.266 07:32:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.266 07:32:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.266 07:32:18 -- paths/export.sh@5 -- # export PATH 00:13:15.266 07:32:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.266 07:32:18 -- nvmf/common.sh@46 -- # : 0 00:13:15.266 07:32:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:15.266 07:32:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:15.266 07:32:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:15.266 07:32:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.266 07:32:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.266 07:32:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:15.266 07:32:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:15.266 07:32:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:15.266 07:32:18 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.266 07:32:18 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:15.266 07:32:18 -- target/abort.sh@14 -- # nvmftestinit 00:13:15.266 07:32:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:15.266 07:32:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.266 07:32:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:15.267 07:32:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:15.267 07:32:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:15.267 07:32:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.267 07:32:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.267 07:32:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.267 07:32:18 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:15.267 07:32:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:15.267 07:32:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:15.267 07:32:18 -- common/autotest_common.sh@10 -- # set +x 00:13:20.543 07:32:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.543 07:32:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:20.543 07:32:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:20.543 07:32:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:20.543 07:32:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:20.543 07:32:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:20.543 07:32:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:20.543 07:32:24 -- nvmf/common.sh@294 -- # net_devs=() 00:13:20.543 07:32:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:20.543 07:32:24 -- nvmf/common.sh@295 -- # e810=() 00:13:20.543 07:32:24 -- nvmf/common.sh@295 -- # local -ga e810 00:13:20.543 07:32:24 -- nvmf/common.sh@296 -- # x722=() 00:13:20.543 07:32:24 -- nvmf/common.sh@296 -- # local -ga x722 00:13:20.543 07:32:24 -- nvmf/common.sh@297 -- # mlx=() 00:13:20.543 07:32:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:20.543 07:32:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.543 07:32:24 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.543 07:32:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:20.543 07:32:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:20.543 07:32:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:20.544 07:32:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.544 07:32:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.544 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.544 07:32:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.544 07:32:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.544 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.544 07:32:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:20.544 07:32:24 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.544 07:32:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.544 07:32:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.544 07:32:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.544 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.544 07:32:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.544 07:32:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.544 07:32:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.544 07:32:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.544 07:32:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.544 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.544 07:32:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.544 07:32:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:20.544 07:32:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:20.544 07:32:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.544 07:32:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.544 07:32:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.544 07:32:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:20.544 07:32:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.544 07:32:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.544 07:32:24 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:20.544 07:32:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.544 07:32:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.544 07:32:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:20.544 07:32:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:20.544 07:32:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.544 07:32:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.544 07:32:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.544 07:32:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.544 07:32:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:20.544 07:32:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.544 07:32:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.544 07:32:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.544 07:32:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:20.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:13:20.544 00:13:20.544 --- 10.0.0.2 ping statistics --- 00:13:20.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.544 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:13:20.544 07:32:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:20.544 00:13:20.544 --- 10.0.0.1 ping statistics --- 00:13:20.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.544 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:20.544 07:32:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.544 07:32:24 -- nvmf/common.sh@410 -- # return 0 00:13:20.544 07:32:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.544 07:32:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.544 07:32:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.544 07:32:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.544 07:32:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.544 07:32:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.803 07:32:24 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:20.803 07:32:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.803 07:32:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.803 07:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:20.803 07:32:24 -- nvmf/common.sh@469 -- # nvmfpid=4051274 00:13:20.803 07:32:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.803 07:32:24 -- nvmf/common.sh@470 -- # waitforlisten 4051274 00:13:20.803 07:32:24 -- common/autotest_common.sh@819 -- # '[' -z 4051274 ']' 00:13:20.803 07:32:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.803 07:32:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.803 07:32:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:20.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.803 07:32:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.803 07:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:20.803 [2024-10-07 07:32:24.564450] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:20.803 [2024-10-07 07:32:24.564490] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.803 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.803 [2024-10-07 07:32:24.621994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.803 [2024-10-07 07:32:24.696651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:20.803 [2024-10-07 07:32:24.696760] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.803 [2024-10-07 07:32:24.696768] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.803 [2024-10-07 07:32:24.696774] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.803 [2024-10-07 07:32:24.696876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.803 [2024-10-07 07:32:24.696964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.803 [2024-10-07 07:32:24.696966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.742 07:32:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:21.742 07:32:25 -- common/autotest_common.sh@852 -- # return 0 00:13:21.742 07:32:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.742 07:32:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 07:32:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.742 07:32:25 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 [2024-10-07 07:32:25.443715] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 Malloc0 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 Delay0 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 [2024-10-07 07:32:25.514813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:21.742 07:32:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.742 07:32:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.742 07:32:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.742 07:32:25 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:21.742 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.742 [2024-10-07 07:32:25.621211] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:24.278 Initializing NVMe Controllers 00:13:24.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:24.278 controller IO queue size 128 less than required 00:13:24.278 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:24.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:24.278 Initialization complete. Launching workers. 00:13:24.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 43070 00:13:24.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43132, failed to submit 62 00:13:24.278 success 43070, unsuccess 62, failed 0 00:13:24.278 07:32:27 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:24.278 07:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.278 07:32:27 -- common/autotest_common.sh@10 -- # set +x 00:13:24.278 07:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.279 07:32:27 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:24.279 07:32:27 -- target/abort.sh@38 -- # nvmftestfini 00:13:24.279 07:32:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:24.279 07:32:27 -- nvmf/common.sh@116 -- # sync 00:13:24.279 07:32:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:24.279 07:32:27 -- nvmf/common.sh@119 -- # set +e 00:13:24.279 07:32:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:24.279 07:32:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:24.279 rmmod nvme_tcp 00:13:24.279 rmmod nvme_fabrics 00:13:24.279 rmmod nvme_keyring 00:13:24.279 07:32:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:24.279 07:32:27 -- nvmf/common.sh@123 -- # set -e 00:13:24.279 07:32:27 -- nvmf/common.sh@124 -- # return 0 00:13:24.279 07:32:27 -- nvmf/common.sh@477 -- # '[' -n 4051274 ']' 00:13:24.279 07:32:27 -- nvmf/common.sh@478 -- # killprocess 4051274 00:13:24.279 07:32:27 -- common/autotest_common.sh@926 -- # '[' -z 4051274 ']' 00:13:24.279 07:32:27 
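The abort-run counters printed above are internally consistent: every submitted abort is accounted for as either success or unsuccess, and each successfully aborted request surfaces as a failed I/O completion. A quick cross-check, with the values copied verbatim from the log:

```python
# Cross-check of the abort statistics from the run above
# (all numbers copied from the log output, not recomputed).
io_completed, io_failed = 124, 43070            # NSID 1 I/O results
abort_submitted, abort_failed_to_submit = 43132, 62
success, unsuccess, failed = 43070, 62, 0       # abort outcomes

# Every submitted abort resolves as success, unsuccess, or failed.
assert abort_submitted == success + unsuccess + failed

# Requests aborted successfully complete back to the caller as
# failed I/Os, so the two counters match.
assert io_failed == success
```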
-- common/autotest_common.sh@930 -- # kill -0 4051274 00:13:24.279 07:32:27 -- common/autotest_common.sh@931 -- # uname 00:13:24.279 07:32:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:24.279 07:32:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4051274 00:13:24.279 07:32:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:24.279 07:32:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:24.279 07:32:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4051274' 00:13:24.279 killing process with pid 4051274 00:13:24.279 07:32:27 -- common/autotest_common.sh@945 -- # kill 4051274 00:13:24.279 07:32:27 -- common/autotest_common.sh@950 -- # wait 4051274 00:13:24.279 07:32:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:24.279 07:32:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:24.279 07:32:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:24.279 07:32:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.279 07:32:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:24.279 07:32:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.279 07:32:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.279 07:32:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.187 07:32:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:26.187 00:13:26.187 real 0m11.415s 00:13:26.187 user 0m13.230s 00:13:26.187 sys 0m5.332s 00:13:26.187 07:32:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.187 07:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 ************************************ 00:13:26.187 END TEST nvmf_abort 00:13:26.187 ************************************ 00:13:26.187 07:32:30 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:13:26.187 07:32:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:26.187 07:32:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.187 07:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 ************************************ 00:13:26.187 START TEST nvmf_ns_hotplug_stress 00:13:26.187 ************************************ 00:13:26.187 07:32:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:26.446 * Looking for test storage... 00:13:26.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.447 07:32:30 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.447 07:32:30 -- nvmf/common.sh@7 -- # uname -s 00:13:26.447 07:32:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.447 07:32:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.447 07:32:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.447 07:32:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.447 07:32:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.447 07:32:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.447 07:32:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.447 07:32:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.447 07:32:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.447 07:32:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.447 07:32:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:26.447 07:32:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:26.447 07:32:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.447 07:32:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:26.447 07:32:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.447 07:32:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.447 07:32:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.447 07:32:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.447 07:32:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.447 07:32:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.447 07:32:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.447 07:32:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.447 07:32:30 -- paths/export.sh@5 -- # export PATH 00:13:26.447 07:32:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.447 07:32:30 -- nvmf/common.sh@46 -- # : 0 00:13:26.447 07:32:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:26.447 07:32:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:26.447 07:32:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:26.447 07:32:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.447 07:32:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.447 07:32:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:26.447 07:32:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:26.447 07:32:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:26.447 07:32:30 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.447 07:32:30 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:26.447 07:32:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:26.447 07:32:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.447 07:32:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:26.447 07:32:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:26.447 07:32:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:26.447 07:32:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.447 07:32:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.447 07:32:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.447 07:32:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:26.447 07:32:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:26.447 07:32:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:26.447 07:32:30 -- common/autotest_common.sh@10 -- # set +x 00:13:31.722 07:32:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:31.722 07:32:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:31.722 07:32:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:31.722 07:32:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:31.722 07:32:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:31.722 07:32:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:31.722 07:32:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:31.722 07:32:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:31.722 07:32:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:31.722 07:32:35 -- nvmf/common.sh@295 -- # e810=() 00:13:31.722 07:32:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:31.722 07:32:35 -- nvmf/common.sh@296 -- # x722=() 00:13:31.722 07:32:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:31.722 07:32:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:31.722 07:32:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:31.722 07:32:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.722 07:32:35 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.722 07:32:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.722 07:32:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.722 07:32:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.722 07:32:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.722 07:32:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.723 07:32:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.723 07:32:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.723 07:32:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.723 07:32:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.723 07:32:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:31.723 07:32:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:31.723 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:31.723 07:32:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:31.723 07:32:35 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:31.723 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:31.723 07:32:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:31.723 07:32:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.723 07:32:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.723 07:32:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:31.723 Found net devices under 0000:af:00.0: cvl_0_0 00:13:31.723 07:32:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:31.723 07:32:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.723 07:32:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.723 07:32:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:31.723 Found net devices under 0000:af:00.1: cvl_0_1 00:13:31.723 07:32:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:31.723 07:32:35 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:31.723 07:32:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:31.723 07:32:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.723 07:32:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.723 07:32:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:31.723 07:32:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.723 07:32:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.723 07:32:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:31.723 07:32:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.723 07:32:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.723 07:32:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:31.723 07:32:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:31.723 07:32:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.723 07:32:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.723 07:32:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.723 07:32:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.723 07:32:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:31.723 07:32:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.723 07:32:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.723 07:32:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.723 07:32:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:31.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:31.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:13:31.723 00:13:31.723 --- 10.0.0.2 ping statistics --- 00:13:31.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.723 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:13:31.723 07:32:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:13:31.723 00:13:31.723 --- 10.0.0.1 ping statistics --- 00:13:31.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.723 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:13:31.723 07:32:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.723 07:32:35 -- nvmf/common.sh@410 -- # return 0 00:13:31.723 07:32:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:31.723 07:32:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.723 07:32:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:31.723 07:32:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.723 07:32:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:31.723 07:32:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:31.723 07:32:35 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:31.723 07:32:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:31.723 07:32:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:31.723 07:32:35 -- common/autotest_common.sh@10 -- # set +x 00:13:31.723 07:32:35 -- nvmf/common.sh@469 -- # nvmfpid=4055232 00:13:31.723 07:32:35 -- nvmf/common.sh@470 -- # waitforlisten 4055232 00:13:31.723 07:32:35 -- common/autotest_common.sh@819 -- # '[' -z 4055232 ']' 00:13:31.723 07:32:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.723 07:32:35 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.723 07:32:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.723 07:32:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:31.723 07:32:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.723 07:32:35 -- common/autotest_common.sh@10 -- # set +x 00:13:31.723 [2024-10-07 07:32:35.439703] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:31.723 [2024-10-07 07:32:35.439744] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.723 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.723 [2024-10-07 07:32:35.497444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.723 [2024-10-07 07:32:35.573819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.723 [2024-10-07 07:32:35.573923] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.723 [2024-10-07 07:32:35.573931] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.723 [2024-10-07 07:32:35.573938] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:31.723 [2024-10-07 07:32:35.574040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.723 [2024-10-07 07:32:35.574066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.723 [2024-10-07 07:32:35.574066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.292 07:32:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.292 07:32:36 -- common/autotest_common.sh@852 -- # return 0 00:13:32.292 07:32:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:32.292 07:32:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:32.292 07:32:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.550 07:32:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.550 07:32:36 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:32.550 07:32:36 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:32.550 [2024-10-07 07:32:36.469681] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.550 07:32:36 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:32.809 07:32:36 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.068 [2024-10-07 07:32:36.835098] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.068 07:32:36 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.068 07:32:37 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:33.327 Malloc0 00:13:33.327 07:32:37 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:33.586 Delay0 00:13:33.586 07:32:37 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.845 07:32:37 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:33.845 NULL1 00:13:33.845 07:32:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:34.181 07:32:37 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4055712 00:13:34.182 07:32:37 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:34.182 07:32:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:34.182 07:32:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.182 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.227 Read completed with error (sct=0, sc=11) 00:13:35.227 07:32:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.486 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:35.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.486 07:32:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:35.486 07:32:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:35.745 true 00:13:35.745 07:32:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:35.745 07:32:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.683 07:32:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.683 07:32:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:36.683 07:32:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:36.942 true 00:13:36.943 07:32:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:36.943 07:32:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.202 07:32:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.202 07:32:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:37.202 07:32:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:37.461 true 00:13:37.461 07:32:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:37.461 07:32:41 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 07:32:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.840 07:32:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:38.840 07:32:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:38.840 true 00:13:38.840 07:32:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:38.840 07:32:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.778 07:32:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.038 07:32:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:40.038 07:32:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:40.038 true 00:13:40.038 07:32:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:40.038 07:32:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.297 07:32:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.556 07:32:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:40.556 07:32:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:40.556 true 00:13:40.815 07:32:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:40.815 07:32:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.755 07:32:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.012 07:32:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:42.012 07:32:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:42.270 true 00:13:42.270 07:32:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:42.270 07:32:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.205 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:43.205 07:32:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.205 07:32:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:43.205 07:32:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:43.465 true 00:13:43.465 07:32:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:43.465 07:32:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.724 07:32:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.984 07:32:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:43.984 07:32:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:43.984 true 00:13:43.984 07:32:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:43.984 07:32:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.362 07:32:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.362 07:32:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:45.362 07:32:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:45.620 true 00:13:45.620 07:32:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:45.620 07:32:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.556 07:32:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.556 07:32:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:46.556 07:32:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:46.814 true 00:13:46.814 07:32:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:46.814 07:32:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.072 07:32:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.072 07:32:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:47.072 07:32:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:47.330 true 00:13:47.330 07:32:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:47.330 07:32:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 07:32:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.705 07:32:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:48.705 07:32:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:48.963 true 00:13:48.963 07:32:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:48.963 07:32:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.898 07:32:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.898 07:32:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:49.898 07:32:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:50.156 true 00:13:50.156 07:32:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:50.156 07:32:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:50.156 07:32:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.415 07:32:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:50.415 07:32:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:50.673 true 00:13:50.673 07:32:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:50.674 07:32:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 07:32:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.051 07:32:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:52.051 07:32:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:52.309 true 00:13:52.309 07:32:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:52.309 07:32:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.244 07:32:56 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.244 07:32:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:53.244 07:32:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:53.503 true 00:13:53.503 07:32:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:53.503 07:32:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.762 07:32:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.762 07:32:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:53.762 07:32:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:54.021 true 00:13:54.021 07:32:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:54.021 07:32:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.959 07:32:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:13:55.218 07:32:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:55.218 07:32:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:55.477 true 00:13:55.477 07:32:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:55.477 07:32:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.414 07:33:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.414 07:33:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:56.414 07:33:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:56.673 true 00:13:56.673 07:33:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:56.673 07:33:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.932 07:33:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.192 07:33:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:57.192 07:33:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:57.192 true 00:13:57.192 07:33:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:57.192 07:33:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 07:33:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.571 07:33:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:58.571 07:33:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:58.830 true 00:13:58.830 07:33:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:13:58.830 07:33:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.767 07:33:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.767 07:33:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:59.767 07:33:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:00.026 true 00:14:00.026 07:33:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:14:00.026 07:33:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:00.285 07:33:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.544 07:33:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:00.544 07:33:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:00.544 true 00:14:00.544 07:33:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:14:00.544 07:33:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 07:33:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.924 07:33:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:01.924 07:33:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:02.182 true 00:14:02.183 07:33:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712 00:14:02.183 07:33:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.119 07:33:06 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.119 07:33:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:14:03.119 07:33:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:14:03.377 true
00:14:03.377 07:33:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712
00:14:03.377 07:33:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.636 07:33:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.636 07:33:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:14:03.636 07:33:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:14:03.895 true
00:14:03.895 07:33:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712
00:14:03.895 07:33:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:05.271 Initializing NVMe Controllers
00:14:05.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:05.271 Controller IO queue size 128, less than required.
00:14:05.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:05.271 Controller IO queue size 128, less than required.
00:14:05.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:05.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:05.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:05.271 Initialization complete. Launching workers.
00:14:05.271 ========================================================
00:14:05.271 Latency(us)
00:14:05.271 Device Information : IOPS MiB/s Average min max
00:14:05.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2092.43 1.02 43968.13 1651.42 1029855.41
00:14:05.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19995.90 9.76 6401.22 1700.14 368341.02
00:14:05.271 ========================================================
00:14:05.271 Total : 22088.32 10.79 9959.93 1651.42 1029855.41
00:14:05.271
00:14:05.271 07:33:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:05.271 07:33:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:14:05.271 07:33:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:14:05.529 true
00:14:05.529 07:33:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4055712
00:14:05.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4055712) - No such process
00:14:05.529 07:33:09 -- target/ns_hotplug_stress.sh@53 -- # wait 4055712
00:14:05.529 07:33:09 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:05.850 07:33:09 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:05.850 07:33:09 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:05.850 07:33:09 --
target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:05.850 07:33:09 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:05.850 07:33:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:05.851 07:33:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:06.110 null0
00:14:06.110 07:33:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.110 07:33:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.110 07:33:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:06.110 null1
00:14:06.110 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.110 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.110 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:06.368 null2
00:14:06.368 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.368 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.368 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:06.625 null3
00:14:06.626 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.626 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.626 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:14:06.626 null4
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:14:06.883 null5
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:06.883 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:14:07.141 null6
00:14:07.141 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.141 07:33:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.141 07:33:10 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:14:07.400 null7
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@66 -- # wait 4061762 4061764 4061765 4061767 4061769 4061771 4061773 4061775 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.400 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.660 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.919 07:33:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.178 07:33:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.178 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.178 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.178 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.178 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.178 07:33:12 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.437 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.696 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.955 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:14:09.215 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.215 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.215 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.215 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.215 07:33:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
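The trace around this point is the namespace hot-plug loop itself (the `ns_hotplug_stress.sh@16`–`@18` xtrace tags: increment, add eight namespaces, remove eight namespaces, repeat while `i < 10`). A minimal sketch of that loop, reconstructed from the log, follows; the `rpc.py` path, the subsystem NQN, and the `null0`–`null7` bdev names are copied from the trace, while packaging it as a function with an injectable RPC command is an illustrative assumption (the shuffled completion order in the log suggests the real script issues the RPCs concurrently, which this sketch does not):

```shell
# Hedged reconstruction of the add/remove loop traced above. The RPC command
# is a parameter so the loop can be exercised without a running nvmf target.
hotplug_stress() {
    local rpc=$1 nqn=$2 iterations=$3 i n
    for (( i = 0; i < iterations; ++i )); do
        # attach namespaces 1..8, backed by null bdevs null0..null7
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # detach them again to exercise the hot-remove path
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done
}

# In the job above this would amount to roughly:
# hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
#     nqn.2016-06.io.spdk:cnode1 10
```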
00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.215 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.475 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.733 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.992 07:33:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.252 07:33:14 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.252 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:10.511 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.770 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:10.771 07:33:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:11.030 07:33:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:11.289 07:33:15 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:11.289 07:33:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.289 07:33:15 -- nvmf/common.sh@116 -- # sync 00:14:11.289 07:33:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.289 07:33:15 -- nvmf/common.sh@119 -- # set +e 00:14:11.289 07:33:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.289 07:33:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.289 rmmod nvme_tcp 00:14:11.289 rmmod nvme_fabrics 00:14:11.289 rmmod nvme_keyring 00:14:11.289 07:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.289 07:33:15 -- nvmf/common.sh@123 -- # set -e 00:14:11.289 07:33:15 -- nvmf/common.sh@124 -- # return 0 00:14:11.289 07:33:15 -- nvmf/common.sh@477 -- # '[' -n 4055232 ']' 00:14:11.289 07:33:15 -- nvmf/common.sh@478 -- # killprocess 4055232 00:14:11.289 07:33:15 -- common/autotest_common.sh@926 -- # '[' -z 4055232 ']' 00:14:11.289 07:33:15 -- common/autotest_common.sh@930 -- # kill -0 4055232 00:14:11.289 07:33:15 -- common/autotest_common.sh@931 -- # uname 00:14:11.289 07:33:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:11.289 07:33:15 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 4055232 00:14:11.289 07:33:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:11.289 07:33:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:11.289 07:33:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4055232' 00:14:11.289 killing process with pid 4055232 00:14:11.289 07:33:15 -- common/autotest_common.sh@945 -- # kill 4055232 00:14:11.289 07:33:15 -- common/autotest_common.sh@950 -- # wait 4055232 00:14:11.572 07:33:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.572 07:33:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.572 07:33:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.572 07:33:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.572 07:33:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.572 07:33:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.572 07:33:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.572 07:33:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.109 07:33:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:14.109 00:14:14.109 real 0m47.317s 00:14:14.109 user 3m12.295s 00:14:14.109 sys 0m14.637s 00:14:14.109 07:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.109 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:14:14.109 ************************************ 00:14:14.109 END TEST nvmf_ns_hotplug_stress 00:14:14.109 ************************************ 00:14:14.109 07:33:17 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:14.109 07:33:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.109 07:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.109 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:14:14.109 ************************************ 
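The teardown trace just above ("killing process with pid 4055232", `autotest_common.sh@926`–`@950`) follows a recognizable kill-helper pattern: probe the pid with `kill -0`, read its comm name via `ps --no-headers -o comm=`, refuse to signal a bare `sudo` wrapper, then kill and reap. A rough sketch of such a helper is below; the edge-case behavior (already-dead pid, pid that is not our child) is an assumption, not the actual `autotest_common.sh` implementation:

```shell
# killprocess-style helper matching the trace above; details are assumed.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
    name=$(ps --no-headers -o comm= "$pid")  # same ps invocation as the trace
    if [ "$name" = sudo ]; then
        return 1                             # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reaping only works for our own children
}
```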
00:14:14.109 START TEST nvmf_connect_stress 00:14:14.109 ************************************ 00:14:14.109 07:33:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:14.109 * Looking for test storage... 00:14:14.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.109 07:33:17 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.109 07:33:17 -- nvmf/common.sh@7 -- # uname -s 00:14:14.109 07:33:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.109 07:33:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.109 07:33:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.109 07:33:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.109 07:33:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.109 07:33:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.110 07:33:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.110 07:33:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.110 07:33:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.110 07:33:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.110 07:33:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:14.110 07:33:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:14.110 07:33:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.110 07:33:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.110 07:33:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.110 07:33:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.110 07:33:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.110 
07:33:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.110 07:33:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.110 07:33:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.110 07:33:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.110 07:33:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.110 07:33:17 -- paths/export.sh@5 -- # export PATH 00:14:14.110 07:33:17 -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.110 07:33:17 -- nvmf/common.sh@46 -- # : 0 00:14:14.110 07:33:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.110 07:33:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.110 07:33:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.110 07:33:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.110 07:33:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.110 07:33:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.110 07:33:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.110 07:33:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.110 07:33:17 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:14.110 07:33:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:14.110 07:33:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.110 07:33:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.110 07:33:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:14.110 07:33:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.110 07:33:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.110 07:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.110 07:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.110 07:33:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:14.110 07:33:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:14.110 
07:33:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:14.110 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.458 07:33:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.458 07:33:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.458 07:33:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.458 07:33:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.458 07:33:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.458 07:33:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.458 07:33:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.458 07:33:22 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.458 07:33:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.458 07:33:22 -- nvmf/common.sh@295 -- # e810=() 00:14:19.458 07:33:22 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.458 07:33:22 -- nvmf/common.sh@296 -- # x722=() 00:14:19.458 07:33:22 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.458 07:33:22 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.458 07:33:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.458 07:33:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.458 07:33:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.458 07:33:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:19.458 07:33:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.458 07:33:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.458 07:33:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:19.458 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:19.458 07:33:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.458 07:33:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:19.458 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:19.458 07:33:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.458 07:33:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:19.458 07:33:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.458 07:33:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.458 07:33:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.458 07:33:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:19.458 Found net devices under 0000:af:00.0: cvl_0_0 00:14:19.458 07:33:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.458 07:33:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.458 07:33:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.458 07:33:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.458 07:33:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.458 07:33:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:19.458 Found net devices under 0000:af:00.1: cvl_0_1 00:14:19.458 07:33:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.458 07:33:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.458 07:33:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.458 07:33:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.458 07:33:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.458 07:33:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.458 07:33:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.458 07:33:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.459 07:33:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.459 07:33:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.459 07:33:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.459 07:33:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.459 07:33:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:19.459 07:33:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.459 07:33:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.459 07:33:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.459 07:33:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.459 07:33:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.459 07:33:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.459 07:33:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.459 07:33:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.459 07:33:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.459 07:33:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.459 07:33:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.459 07:33:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:14:19.459 00:14:19.459 --- 10.0.0.2 ping statistics --- 00:14:19.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.459 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:19.459 07:33:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:19.459 00:14:19.459 --- 10.0.0.1 ping statistics --- 00:14:19.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.459 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:19.459 07:33:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.459 07:33:22 -- nvmf/common.sh@410 -- # return 0 00:14:19.459 07:33:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.459 07:33:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.459 07:33:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.459 07:33:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.459 07:33:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.459 07:33:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.459 07:33:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.459 07:33:23 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:19.459 07:33:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.459 07:33:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.459 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.459 07:33:23 -- nvmf/common.sh@469 -- # nvmfpid=4066020 00:14:19.459 07:33:23 -- nvmf/common.sh@470 -- # waitforlisten 4066020 00:14:19.459 07:33:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:19.459 07:33:23 -- common/autotest_common.sh@819 -- # '[' -z 4066020 ']' 00:14:19.459 07:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.459 07:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.459 07:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:19.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.459 07:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.459 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.459 [2024-10-07 07:33:23.042987] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:19.459 [2024-10-07 07:33:23.043028] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.459 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.459 [2024-10-07 07:33:23.100200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.459 [2024-10-07 07:33:23.173896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.459 [2024-10-07 07:33:23.174009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.459 [2024-10-07 07:33:23.174016] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.459 [2024-10-07 07:33:23.174023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:19.459 [2024-10-07 07:33:23.174123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.459 [2024-10-07 07:33:23.174217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.459 [2024-10-07 07:33:23.174218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.083 07:33:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.083 07:33:23 -- common/autotest_common.sh@852 -- # return 0 00:14:20.083 07:33:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.083 07:33:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.083 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.083 07:33:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.083 07:33:23 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.083 07:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.083 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.083 [2024-10-07 07:33:23.918098] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.083 07:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.083 07:33:23 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.083 07:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.084 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.084 07:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.084 07:33:23 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.084 07:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.084 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.084 [2024-10-07 07:33:23.953190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:20.084 07:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.084 07:33:23 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:20.084 07:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.084 07:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.084 NULL1 00:14:20.084 07:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.084 07:33:23 -- target/connect_stress.sh@21 -- # PERF_PID=4066117 00:14:20.084 07:33:23 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.084 07:33:23 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:20.084 07:33:23 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:23 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:23 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:23 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:23 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:23 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:23 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 
-- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.084 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.084 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.342 07:33:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:20.342 07:33:24 -- target/connect_stress.sh@28 -- # cat 00:14:20.342 
07:33:24 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:20.342 07:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.342 07:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.342 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:14:20.601 07:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.601 07:33:24 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:20.601 07:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.601 07:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.601 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:14:20.859 07:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.859 07:33:24 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:20.859 07:33:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.859 07:33:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.859 07:33:24 -- common/autotest_common.sh@10 -- # set +x 00:14:21.116 07:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.116 07:33:25 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:21.116 07:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.117 07:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.117 07:33:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.684 07:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.684 07:33:25 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:21.684 07:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.684 07:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.684 07:33:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.943 07:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.943 07:33:25 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:21.943 07:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.943 07:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.943 07:33:25 -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.201 07:33:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.201 07:33:25 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:22.201 07:33:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.201 07:33:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.201 07:33:25 -- common/autotest_common.sh@10 -- # set +x 00:14:22.460 07:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.460 07:33:26 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:22.460 07:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.460 07:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.460 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:14:22.719 07:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.719 07:33:26 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:22.719 07:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.719 07:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.719 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 07:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.285 07:33:26 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:23.285 07:33:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.285 07:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.285 07:33:26 -- common/autotest_common.sh@10 -- # set +x 00:14:23.543 07:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.543 07:33:27 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:23.543 07:33:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.543 07:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.543 07:33:27 -- common/autotest_common.sh@10 -- # set +x 00:14:23.802 07:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.802 07:33:27 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:23.802 07:33:27 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.802 07:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.802 07:33:27 -- common/autotest_common.sh@10 -- # set +x 00:14:24.061 07:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.061 07:33:27 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:24.061 07:33:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.061 07:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.061 07:33:27 -- common/autotest_common.sh@10 -- # set +x 00:14:24.319 07:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.319 07:33:28 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:24.319 07:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.319 07:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.319 07:33:28 -- common/autotest_common.sh@10 -- # set +x 00:14:24.885 07:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.885 07:33:28 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:24.885 07:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.885 07:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.885 07:33:28 -- common/autotest_common.sh@10 -- # set +x 00:14:25.142 07:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.142 07:33:28 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:25.142 07:33:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.142 07:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.142 07:33:28 -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 07:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.400 07:33:29 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:25.400 07:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.400 07:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.400 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:14:25.659 07:33:29 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.659 07:33:29 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:25.659 07:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.659 07:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.659 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:14:25.918 07:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:25.918 07:33:29 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:25.918 07:33:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.918 07:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:25.918 07:33:29 -- common/autotest_common.sh@10 -- # set +x 00:14:26.484 07:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.484 07:33:30 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:26.484 07:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.484 07:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.484 07:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:26.743 07:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.743 07:33:30 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:26.743 07:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.743 07:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.743 07:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:27.001 07:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.001 07:33:30 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:27.001 07:33:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.001 07:33:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.001 07:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:27.260 07:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.260 07:33:31 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:27.260 07:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.260 07:33:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.260 07:33:31 -- common/autotest_common.sh@10 -- # set +x 00:14:27.518 07:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:27.518 07:33:31 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:27.518 07:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.518 07:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:27.518 07:33:31 -- common/autotest_common.sh@10 -- # set +x 00:14:28.085 07:33:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.085 07:33:31 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:28.085 07:33:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.085 07:33:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.085 07:33:31 -- common/autotest_common.sh@10 -- # set +x 00:14:28.343 07:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.343 07:33:32 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:28.343 07:33:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.343 07:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.343 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:28.602 07:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.602 07:33:32 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:28.602 07:33:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.602 07:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.602 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:28.860 07:33:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.860 07:33:32 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:28.860 07:33:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.860 07:33:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.860 07:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.427 07:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.427 07:33:33 -- 
target/connect_stress.sh@34 -- # kill -0 4066117 00:14:29.427 07:33:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.427 07:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.427 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.687 07:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.687 07:33:33 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:29.687 07:33:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.687 07:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.687 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:14:29.945 07:33:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.945 07:33:33 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:29.945 07:33:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.945 07:33:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.945 07:33:33 -- common/autotest_common.sh@10 -- # set +x 00:14:30.203 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.203 07:33:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.203 07:33:34 -- target/connect_stress.sh@34 -- # kill -0 4066117 00:14:30.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4066117) - No such process 00:14:30.203 07:33:34 -- target/connect_stress.sh@38 -- # wait 4066117 00:14:30.203 07:33:34 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:30.203 07:33:34 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:30.203 07:33:34 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:30.203 07:33:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.203 07:33:34 -- nvmf/common.sh@116 -- # sync 00:14:30.203 07:33:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.203 07:33:34 -- nvmf/common.sh@119 -- # set +e 00:14:30.203 07:33:34 -- nvmf/common.sh@120 -- 
# for i in {1..20} 00:14:30.203 07:33:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.203 rmmod nvme_tcp 00:14:30.203 rmmod nvme_fabrics 00:14:30.203 rmmod nvme_keyring 00:14:30.203 07:33:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.203 07:33:34 -- nvmf/common.sh@123 -- # set -e 00:14:30.203 07:33:34 -- nvmf/common.sh@124 -- # return 0 00:14:30.203 07:33:34 -- nvmf/common.sh@477 -- # '[' -n 4066020 ']' 00:14:30.203 07:33:34 -- nvmf/common.sh@478 -- # killprocess 4066020 00:14:30.203 07:33:34 -- common/autotest_common.sh@926 -- # '[' -z 4066020 ']' 00:14:30.203 07:33:34 -- common/autotest_common.sh@930 -- # kill -0 4066020 00:14:30.203 07:33:34 -- common/autotest_common.sh@931 -- # uname 00:14:30.203 07:33:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:30.203 07:33:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4066020 00:14:30.463 07:33:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:30.463 07:33:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:30.463 07:33:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4066020' 00:14:30.463 killing process with pid 4066020 00:14:30.463 07:33:34 -- common/autotest_common.sh@945 -- # kill 4066020 00:14:30.463 07:33:34 -- common/autotest_common.sh@950 -- # wait 4066020 00:14:30.463 07:33:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:30.463 07:33:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:30.463 07:33:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:30.463 07:33:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.463 07:33:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:30.463 07:33:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.463 07:33:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.463 07:33:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.000 
07:33:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:33.000 00:14:33.000 real 0m18.943s 00:14:33.000 user 0m40.807s 00:14:33.000 sys 0m8.037s 00:14:33.000 07:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.000 07:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:33.000 ************************************ 00:14:33.000 END TEST nvmf_connect_stress 00:14:33.000 ************************************ 00:14:33.000 07:33:36 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.000 07:33:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.000 07:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.000 07:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:33.000 ************************************ 00:14:33.000 START TEST nvmf_fused_ordering 00:14:33.000 ************************************ 00:14:33.000 07:33:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:33.000 * Looking for test storage... 
00:14:33.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.000 07:33:36 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.000 07:33:36 -- nvmf/common.sh@7 -- # uname -s 00:14:33.000 07:33:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.000 07:33:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.000 07:33:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.000 07:33:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.000 07:33:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.000 07:33:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.000 07:33:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.000 07:33:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.000 07:33:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.000 07:33:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.000 07:33:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:14:33.000 07:33:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:14:33.000 07:33:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.000 07:33:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.000 07:33:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.000 07:33:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.000 07:33:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.000 07:33:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.000 07:33:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.000 07:33:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.000 07:33:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.000 07:33:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.000 07:33:36 -- paths/export.sh@5 -- # export PATH 00:14:33.000 07:33:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.000 07:33:36 -- nvmf/common.sh@46 -- # : 0 00:14:33.000 07:33:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.000 07:33:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.000 07:33:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.000 07:33:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.000 07:33:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.000 07:33:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.000 07:33:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.000 07:33:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.000 07:33:36 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:33.000 07:33:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.000 07:33:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.000 07:33:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.000 07:33:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.000 07:33:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.000 07:33:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.000 07:33:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.000 07:33:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.000 07:33:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:33.000 07:33:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:33.000 07:33:36 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:33.000 07:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.275 07:33:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:38.275 07:33:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:38.275 07:33:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:38.275 07:33:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:38.275 07:33:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:38.275 07:33:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:38.275 07:33:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:38.275 07:33:41 -- nvmf/common.sh@294 -- # net_devs=() 00:14:38.275 07:33:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:38.275 07:33:41 -- nvmf/common.sh@295 -- # e810=() 00:14:38.275 07:33:41 -- nvmf/common.sh@295 -- # local -ga e810 00:14:38.275 07:33:41 -- nvmf/common.sh@296 -- # x722=() 00:14:38.275 07:33:41 -- nvmf/common.sh@296 -- # local -ga x722 00:14:38.275 07:33:41 -- nvmf/common.sh@297 -- # mlx=() 00:14:38.275 07:33:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:38.275 07:33:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.275 07:33:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:38.276 07:33:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:38.276 07:33:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:38.276 07:33:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:38.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:38.276 07:33:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:38.276 07:33:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:38.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:38.276 07:33:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:38.276 07:33:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.276 07:33:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.276 07:33:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:38.276 Found net devices under 0000:af:00.0: cvl_0_0 00:14:38.276 07:33:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.276 07:33:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:38.276 07:33:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.276 07:33:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.276 07:33:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:38.276 Found net devices under 0000:af:00.1: cvl_0_1 00:14:38.276 07:33:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.276 07:33:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:38.276 07:33:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:38.276 07:33:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:38.276 07:33:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.276 07:33:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.276 07:33:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.276 07:33:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:38.276 07:33:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.276 07:33:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.276 07:33:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:38.276 07:33:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:38.276 07:33:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.276 07:33:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:38.276 07:33:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:38.276 07:33:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.276 07:33:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.276 07:33:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.276 07:33:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.276 07:33:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:38.276 07:33:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.276 07:33:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.276 07:33:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.276 07:33:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:38.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:14:38.276 00:14:38.276 --- 10.0.0.2 ping statistics --- 00:14:38.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.276 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:14:38.276 07:33:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:14:38.276 00:14:38.276 --- 10.0.0.1 ping statistics --- 00:14:38.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.276 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:38.276 07:33:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.276 07:33:42 -- nvmf/common.sh@410 -- # return 0 00:14:38.276 07:33:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.276 07:33:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.276 07:33:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.276 07:33:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.276 07:33:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.276 07:33:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.276 07:33:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.276 07:33:42 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:38.276 07:33:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:38.276 07:33:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:38.276 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:38.276 07:33:42 -- nvmf/common.sh@469 -- # nvmfpid=4071227 00:14:38.276 07:33:42 -- nvmf/common.sh@470 -- # waitforlisten 4071227 00:14:38.276 07:33:42 -- common/autotest_common.sh@819 -- # '[' -z 4071227 ']' 00:14:38.276 07:33:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.276 07:33:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.276 07:33:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.276 07:33:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
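The `nvmf_tcp_init` sequence above builds the test network: one port (cvl_0_0) is moved into a private network namespace to play the target, 10.0.0.1/10.0.0.2 are assigned on either side, TCP port 4420 is opened in the firewall, and connectivity is verified with a ping in each direction before `nvme-tcp` is loaded. A hedged sketch of the equivalent commands, with interface, namespace, and address names taken from the log (requires root and the two NICs, so this is an illustration rather than the common.sh implementation):

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-network setup performed by nvmf_tcp_init in the log.
set -e

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default listener port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```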
00:14:38.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.276 07:33:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.276 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:38.276 [2024-10-07 07:33:42.107939] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:38.276 [2024-10-07 07:33:42.107982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.276 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.276 [2024-10-07 07:33:42.166457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.277 [2024-10-07 07:33:42.240981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.277 [2024-10-07 07:33:42.241097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.277 [2024-10-07 07:33:42.241109] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.277 [2024-10-07 07:33:42.241117] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.277 [2024-10-07 07:33:42.241132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.212 07:33:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.212 07:33:42 -- common/autotest_common.sh@852 -- # return 0 00:14:39.212 07:33:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:39.212 07:33:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 07:33:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.212 07:33:42 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.212 07:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 [2024-10-07 07:33:42.959320] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.212 07:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:42 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.212 07:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 07:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:42 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.212 07:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 [2024-10-07 07:33:42.975464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.212 07:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:42 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:39.212 07:33:42 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 NULL1 00:14:39.212 07:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:42 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:39.212 07:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 07:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:42 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:39.212 07:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.212 07:33:42 -- common/autotest_common.sh@10 -- # set +x 00:14:39.212 07:33:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.212 07:33:43 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:39.212 [2024-10-07 07:33:43.023978] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
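The RPC sequence above configures the running target for the test: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (allow any host, serial number, at most 10 namespaces), add a TCP listener on 10.0.0.2:4420, create a 1000 MiB null bdev with 512-byte blocks, wait for bdev examination, and attach the bdev as namespace 1. A hedged sketch of the same steps issued through scripts/rpc.py; the socket path and script location are assumptions, since the log drives these calls through the `rpc_cmd` wrapper instead:

```shell
#!/usr/bin/env bash
# Sketch of the target configuration from the log, via SPDK's rpc.py.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192          # -o as recorded in the log's
                                                      # NVMF_TRANSPORT_OPTS; -u sets
                                                      # in-capsule data size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                    # allow-any-host, serial, max namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # 1000 MiB, 512 B block size
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The fused_ordering binary then connects as an initiator using the `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...cnode1` transport ID shown in the log, which is why the namespace appears as "Namespace ID: 1 size: 1GB" below.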
00:14:39.212 [2024-10-07 07:33:43.024019] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071457 ] 00:14:39.212 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.472 Attached to nqn.2016-06.io.spdk:cnode1 00:14:39.472 Namespace ID: 1 size: 1GB 00:14:39.472 fused_ordering(0) 00:14:39.472 fused_ordering(1) 00:14:39.472 fused_ordering(2) 00:14:39.472 fused_ordering(3) 00:14:39.472 fused_ordering(4) 00:14:39.472 fused_ordering(5) 00:14:39.472 fused_ordering(6) 00:14:39.472 fused_ordering(7) 00:14:39.472 fused_ordering(8) 00:14:39.472 fused_ordering(9) 00:14:39.472 fused_ordering(10) 00:14:39.472 fused_ordering(11) 00:14:39.472 fused_ordering(12) 00:14:39.472 fused_ordering(13) 00:14:39.472 fused_ordering(14) 00:14:39.472 fused_ordering(15) 00:14:39.472 fused_ordering(16) 00:14:39.472 fused_ordering(17) 00:14:39.472 fused_ordering(18) 00:14:39.472 fused_ordering(19) 00:14:39.472 fused_ordering(20) 00:14:39.472 fused_ordering(21) 00:14:39.472 fused_ordering(22) 00:14:39.472 fused_ordering(23) 00:14:39.472 fused_ordering(24) 00:14:39.472 fused_ordering(25) 00:14:39.472 fused_ordering(26) 00:14:39.472 fused_ordering(27) 00:14:39.472 fused_ordering(28) 00:14:39.472 fused_ordering(29) 00:14:39.472 fused_ordering(30) 00:14:39.472 fused_ordering(31) 00:14:39.472 fused_ordering(32) 00:14:39.472 fused_ordering(33) 00:14:39.472 fused_ordering(34) 00:14:39.472 fused_ordering(35) 00:14:39.472 fused_ordering(36) 00:14:39.472 fused_ordering(37) 00:14:39.472 fused_ordering(38) 00:14:39.472 fused_ordering(39) 00:14:39.472 fused_ordering(40) 00:14:39.472 fused_ordering(41) 00:14:39.472 fused_ordering(42) 00:14:39.472 fused_ordering(43) 00:14:39.472 fused_ordering(44) 00:14:39.472 fused_ordering(45) 00:14:39.472 fused_ordering(46) 00:14:39.472 fused_ordering(47) 00:14:39.472 fused_ordering(48) 
00:14:39.472 fused_ordering(49) … 00:14:39.734 fused_ordering(352) [sequential counter output condensed: fused_ordering(49) through fused_ordering(352) with no gaps; timestamps advance from 00:14:39.472 to 00:14:39.734]
00:14:39.734 fused_ordering(353) 00:14:39.734 fused_ordering(354) 00:14:39.734 fused_ordering(355) 00:14:39.734 fused_ordering(356) 00:14:39.734 fused_ordering(357) 00:14:39.734 fused_ordering(358) 00:14:39.734 fused_ordering(359) 00:14:39.734 fused_ordering(360) 00:14:39.734 fused_ordering(361) 00:14:39.734 fused_ordering(362) 00:14:39.734 fused_ordering(363) 00:14:39.734 fused_ordering(364) 00:14:39.734 fused_ordering(365) 00:14:39.734 fused_ordering(366) 00:14:39.734 fused_ordering(367) 00:14:39.734 fused_ordering(368) 00:14:39.734 fused_ordering(369) 00:14:39.734 fused_ordering(370) 00:14:39.734 fused_ordering(371) 00:14:39.734 fused_ordering(372) 00:14:39.734 fused_ordering(373) 00:14:39.734 fused_ordering(374) 00:14:39.734 fused_ordering(375) 00:14:39.734 fused_ordering(376) 00:14:39.734 fused_ordering(377) 00:14:39.734 fused_ordering(378) 00:14:39.734 fused_ordering(379) 00:14:39.734 fused_ordering(380) 00:14:39.734 fused_ordering(381) 00:14:39.734 fused_ordering(382) 00:14:39.734 fused_ordering(383) 00:14:39.734 fused_ordering(384) 00:14:39.734 fused_ordering(385) 00:14:39.734 fused_ordering(386) 00:14:39.734 fused_ordering(387) 00:14:39.734 fused_ordering(388) 00:14:39.734 fused_ordering(389) 00:14:39.734 fused_ordering(390) 00:14:39.734 fused_ordering(391) 00:14:39.734 fused_ordering(392) 00:14:39.734 fused_ordering(393) 00:14:39.734 fused_ordering(394) 00:14:39.734 fused_ordering(395) 00:14:39.734 fused_ordering(396) 00:14:39.734 fused_ordering(397) 00:14:39.734 fused_ordering(398) 00:14:39.734 fused_ordering(399) 00:14:39.734 fused_ordering(400) 00:14:39.734 fused_ordering(401) 00:14:39.734 fused_ordering(402) 00:14:39.734 fused_ordering(403) 00:14:39.734 fused_ordering(404) 00:14:39.735 fused_ordering(405) 00:14:39.735 fused_ordering(406) 00:14:39.735 fused_ordering(407) 00:14:39.735 fused_ordering(408) 00:14:39.735 fused_ordering(409) 00:14:39.735 fused_ordering(410) 00:14:40.304 fused_ordering(411) 00:14:40.304 fused_ordering(412) 00:14:40.304 
fused_ordering(413) 00:14:40.304 fused_ordering(414) 00:14:40.304 fused_ordering(415) 00:14:40.304 fused_ordering(416) 00:14:40.304 fused_ordering(417) 00:14:40.304 fused_ordering(418) 00:14:40.304 fused_ordering(419) 00:14:40.304 fused_ordering(420) 00:14:40.304 fused_ordering(421) 00:14:40.304 fused_ordering(422) 00:14:40.304 fused_ordering(423) 00:14:40.304 fused_ordering(424) 00:14:40.304 fused_ordering(425) 00:14:40.304 fused_ordering(426) 00:14:40.304 fused_ordering(427) 00:14:40.304 fused_ordering(428) 00:14:40.304 fused_ordering(429) 00:14:40.304 fused_ordering(430) 00:14:40.304 fused_ordering(431) 00:14:40.304 fused_ordering(432) 00:14:40.304 fused_ordering(433) 00:14:40.304 fused_ordering(434) 00:14:40.304 fused_ordering(435) 00:14:40.304 fused_ordering(436) 00:14:40.304 fused_ordering(437) 00:14:40.304 fused_ordering(438) 00:14:40.304 fused_ordering(439) 00:14:40.304 fused_ordering(440) 00:14:40.304 fused_ordering(441) 00:14:40.304 fused_ordering(442) 00:14:40.304 fused_ordering(443) 00:14:40.304 fused_ordering(444) 00:14:40.304 fused_ordering(445) 00:14:40.304 fused_ordering(446) 00:14:40.304 fused_ordering(447) 00:14:40.304 fused_ordering(448) 00:14:40.304 fused_ordering(449) 00:14:40.304 fused_ordering(450) 00:14:40.304 fused_ordering(451) 00:14:40.304 fused_ordering(452) 00:14:40.304 fused_ordering(453) 00:14:40.304 fused_ordering(454) 00:14:40.304 fused_ordering(455) 00:14:40.304 fused_ordering(456) 00:14:40.304 fused_ordering(457) 00:14:40.304 fused_ordering(458) 00:14:40.304 fused_ordering(459) 00:14:40.304 fused_ordering(460) 00:14:40.304 fused_ordering(461) 00:14:40.304 fused_ordering(462) 00:14:40.304 fused_ordering(463) 00:14:40.304 fused_ordering(464) 00:14:40.304 fused_ordering(465) 00:14:40.304 fused_ordering(466) 00:14:40.304 fused_ordering(467) 00:14:40.304 fused_ordering(468) 00:14:40.304 fused_ordering(469) 00:14:40.304 fused_ordering(470) 00:14:40.304 fused_ordering(471) 00:14:40.304 fused_ordering(472) 00:14:40.304 fused_ordering(473) 
00:14:40.304 fused_ordering(474) 00:14:40.304 fused_ordering(475) 00:14:40.304 fused_ordering(476) 00:14:40.304 fused_ordering(477) 00:14:40.304 fused_ordering(478) 00:14:40.304 fused_ordering(479) 00:14:40.304 fused_ordering(480) 00:14:40.304 fused_ordering(481) 00:14:40.304 fused_ordering(482) 00:14:40.304 fused_ordering(483) 00:14:40.304 fused_ordering(484) 00:14:40.304 fused_ordering(485) 00:14:40.304 fused_ordering(486) 00:14:40.304 fused_ordering(487) 00:14:40.304 fused_ordering(488) 00:14:40.304 fused_ordering(489) 00:14:40.304 fused_ordering(490) 00:14:40.304 fused_ordering(491) 00:14:40.304 fused_ordering(492) 00:14:40.304 fused_ordering(493) 00:14:40.304 fused_ordering(494) 00:14:40.304 fused_ordering(495) 00:14:40.304 fused_ordering(496) 00:14:40.304 fused_ordering(497) 00:14:40.304 fused_ordering(498) 00:14:40.304 fused_ordering(499) 00:14:40.304 fused_ordering(500) 00:14:40.304 fused_ordering(501) 00:14:40.304 fused_ordering(502) 00:14:40.304 fused_ordering(503) 00:14:40.304 fused_ordering(504) 00:14:40.304 fused_ordering(505) 00:14:40.304 fused_ordering(506) 00:14:40.304 fused_ordering(507) 00:14:40.304 fused_ordering(508) 00:14:40.304 fused_ordering(509) 00:14:40.304 fused_ordering(510) 00:14:40.304 fused_ordering(511) 00:14:40.304 fused_ordering(512) 00:14:40.304 fused_ordering(513) 00:14:40.304 fused_ordering(514) 00:14:40.304 fused_ordering(515) 00:14:40.304 fused_ordering(516) 00:14:40.304 fused_ordering(517) 00:14:40.304 fused_ordering(518) 00:14:40.304 fused_ordering(519) 00:14:40.304 fused_ordering(520) 00:14:40.304 fused_ordering(521) 00:14:40.304 fused_ordering(522) 00:14:40.304 fused_ordering(523) 00:14:40.304 fused_ordering(524) 00:14:40.304 fused_ordering(525) 00:14:40.304 fused_ordering(526) 00:14:40.304 fused_ordering(527) 00:14:40.304 fused_ordering(528) 00:14:40.304 fused_ordering(529) 00:14:40.304 fused_ordering(530) 00:14:40.304 fused_ordering(531) 00:14:40.304 fused_ordering(532) 00:14:40.304 fused_ordering(533) 00:14:40.304 
fused_ordering(534) 00:14:40.304 fused_ordering(535) 00:14:40.304 fused_ordering(536) 00:14:40.304 fused_ordering(537) 00:14:40.304 fused_ordering(538) 00:14:40.304 fused_ordering(539) 00:14:40.304 fused_ordering(540) 00:14:40.304 fused_ordering(541) 00:14:40.304 fused_ordering(542) 00:14:40.304 fused_ordering(543) 00:14:40.304 fused_ordering(544) 00:14:40.304 fused_ordering(545) 00:14:40.304 fused_ordering(546) 00:14:40.304 fused_ordering(547) 00:14:40.304 fused_ordering(548) 00:14:40.304 fused_ordering(549) 00:14:40.304 fused_ordering(550) 00:14:40.304 fused_ordering(551) 00:14:40.304 fused_ordering(552) 00:14:40.304 fused_ordering(553) 00:14:40.304 fused_ordering(554) 00:14:40.304 fused_ordering(555) 00:14:40.304 fused_ordering(556) 00:14:40.304 fused_ordering(557) 00:14:40.304 fused_ordering(558) 00:14:40.304 fused_ordering(559) 00:14:40.304 fused_ordering(560) 00:14:40.304 fused_ordering(561) 00:14:40.304 fused_ordering(562) 00:14:40.304 fused_ordering(563) 00:14:40.304 fused_ordering(564) 00:14:40.304 fused_ordering(565) 00:14:40.304 fused_ordering(566) 00:14:40.304 fused_ordering(567) 00:14:40.304 fused_ordering(568) 00:14:40.304 fused_ordering(569) 00:14:40.304 fused_ordering(570) 00:14:40.304 fused_ordering(571) 00:14:40.304 fused_ordering(572) 00:14:40.304 fused_ordering(573) 00:14:40.304 fused_ordering(574) 00:14:40.304 fused_ordering(575) 00:14:40.304 fused_ordering(576) 00:14:40.304 fused_ordering(577) 00:14:40.304 fused_ordering(578) 00:14:40.304 fused_ordering(579) 00:14:40.304 fused_ordering(580) 00:14:40.304 fused_ordering(581) 00:14:40.304 fused_ordering(582) 00:14:40.304 fused_ordering(583) 00:14:40.304 fused_ordering(584) 00:14:40.304 fused_ordering(585) 00:14:40.304 fused_ordering(586) 00:14:40.304 fused_ordering(587) 00:14:40.304 fused_ordering(588) 00:14:40.304 fused_ordering(589) 00:14:40.304 fused_ordering(590) 00:14:40.304 fused_ordering(591) 00:14:40.304 fused_ordering(592) 00:14:40.304 fused_ordering(593) 00:14:40.304 fused_ordering(594) 
00:14:40.304 fused_ordering(595) 00:14:40.304 fused_ordering(596) 00:14:40.304 fused_ordering(597) 00:14:40.304 fused_ordering(598) 00:14:40.304 fused_ordering(599) 00:14:40.304 fused_ordering(600) 00:14:40.304 fused_ordering(601) 00:14:40.304 fused_ordering(602) 00:14:40.304 fused_ordering(603) 00:14:40.304 fused_ordering(604) 00:14:40.304 fused_ordering(605) 00:14:40.304 fused_ordering(606) 00:14:40.304 fused_ordering(607) 00:14:40.304 fused_ordering(608) 00:14:40.304 fused_ordering(609) 00:14:40.304 fused_ordering(610) 00:14:40.304 fused_ordering(611) 00:14:40.304 fused_ordering(612) 00:14:40.304 fused_ordering(613) 00:14:40.304 fused_ordering(614) 00:14:40.304 fused_ordering(615) 00:14:40.565 fused_ordering(616) 00:14:40.565 fused_ordering(617) 00:14:40.565 fused_ordering(618) 00:14:40.565 fused_ordering(619) 00:14:40.565 fused_ordering(620) 00:14:40.565 fused_ordering(621) 00:14:40.565 fused_ordering(622) 00:14:40.565 fused_ordering(623) 00:14:40.565 fused_ordering(624) 00:14:40.565 fused_ordering(625) 00:14:40.565 fused_ordering(626) 00:14:40.565 fused_ordering(627) 00:14:40.565 fused_ordering(628) 00:14:40.565 fused_ordering(629) 00:14:40.565 fused_ordering(630) 00:14:40.565 fused_ordering(631) 00:14:40.565 fused_ordering(632) 00:14:40.565 fused_ordering(633) 00:14:40.565 fused_ordering(634) 00:14:40.565 fused_ordering(635) 00:14:40.565 fused_ordering(636) 00:14:40.565 fused_ordering(637) 00:14:40.565 fused_ordering(638) 00:14:40.565 fused_ordering(639) 00:14:40.565 fused_ordering(640) 00:14:40.565 fused_ordering(641) 00:14:40.565 fused_ordering(642) 00:14:40.565 fused_ordering(643) 00:14:40.565 fused_ordering(644) 00:14:40.565 fused_ordering(645) 00:14:40.565 fused_ordering(646) 00:14:40.565 fused_ordering(647) 00:14:40.565 fused_ordering(648) 00:14:40.565 fused_ordering(649) 00:14:40.565 fused_ordering(650) 00:14:40.565 fused_ordering(651) 00:14:40.565 fused_ordering(652) 00:14:40.565 fused_ordering(653) 00:14:40.565 fused_ordering(654) 00:14:40.565 
fused_ordering(655) 00:14:40.565 fused_ordering(656) 00:14:40.565 fused_ordering(657) 00:14:40.565 fused_ordering(658) 00:14:40.565 fused_ordering(659) 00:14:40.565 fused_ordering(660) 00:14:40.565 fused_ordering(661) 00:14:40.565 fused_ordering(662) 00:14:40.565 fused_ordering(663) 00:14:40.565 fused_ordering(664) 00:14:40.565 fused_ordering(665) 00:14:40.565 fused_ordering(666) 00:14:40.565 fused_ordering(667) 00:14:40.565 fused_ordering(668) 00:14:40.565 fused_ordering(669) 00:14:40.565 fused_ordering(670) 00:14:40.565 fused_ordering(671) 00:14:40.565 fused_ordering(672) 00:14:40.565 fused_ordering(673) 00:14:40.565 fused_ordering(674) 00:14:40.565 fused_ordering(675) 00:14:40.565 fused_ordering(676) 00:14:40.565 fused_ordering(677) 00:14:40.565 fused_ordering(678) 00:14:40.565 fused_ordering(679) 00:14:40.565 fused_ordering(680) 00:14:40.565 fused_ordering(681) 00:14:40.565 fused_ordering(682) 00:14:40.565 fused_ordering(683) 00:14:40.565 fused_ordering(684) 00:14:40.565 fused_ordering(685) 00:14:40.565 fused_ordering(686) 00:14:40.565 fused_ordering(687) 00:14:40.565 fused_ordering(688) 00:14:40.565 fused_ordering(689) 00:14:40.565 fused_ordering(690) 00:14:40.565 fused_ordering(691) 00:14:40.565 fused_ordering(692) 00:14:40.565 fused_ordering(693) 00:14:40.565 fused_ordering(694) 00:14:40.565 fused_ordering(695) 00:14:40.565 fused_ordering(696) 00:14:40.565 fused_ordering(697) 00:14:40.565 fused_ordering(698) 00:14:40.565 fused_ordering(699) 00:14:40.565 fused_ordering(700) 00:14:40.565 fused_ordering(701) 00:14:40.565 fused_ordering(702) 00:14:40.565 fused_ordering(703) 00:14:40.565 fused_ordering(704) 00:14:40.565 fused_ordering(705) 00:14:40.565 fused_ordering(706) 00:14:40.565 fused_ordering(707) 00:14:40.565 fused_ordering(708) 00:14:40.565 fused_ordering(709) 00:14:40.565 fused_ordering(710) 00:14:40.565 fused_ordering(711) 00:14:40.565 fused_ordering(712) 00:14:40.565 fused_ordering(713) 00:14:40.565 fused_ordering(714) 00:14:40.565 fused_ordering(715) 
00:14:40.565 fused_ordering(716) 00:14:40.565 fused_ordering(717) 00:14:40.565 fused_ordering(718) 00:14:40.565 fused_ordering(719) 00:14:40.565 fused_ordering(720) 00:14:40.565 fused_ordering(721) 00:14:40.565 fused_ordering(722) 00:14:40.565 fused_ordering(723) 00:14:40.565 fused_ordering(724) 00:14:40.565 fused_ordering(725) 00:14:40.565 fused_ordering(726) 00:14:40.565 fused_ordering(727) 00:14:40.565 fused_ordering(728) 00:14:40.565 fused_ordering(729) 00:14:40.565 fused_ordering(730) 00:14:40.565 fused_ordering(731) 00:14:40.565 fused_ordering(732) 00:14:40.565 fused_ordering(733) 00:14:40.565 fused_ordering(734) 00:14:40.565 fused_ordering(735) 00:14:40.565 fused_ordering(736) 00:14:40.565 fused_ordering(737) 00:14:40.565 fused_ordering(738) 00:14:40.565 fused_ordering(739) 00:14:40.565 fused_ordering(740) 00:14:40.565 fused_ordering(741) 00:14:40.565 fused_ordering(742) 00:14:40.565 fused_ordering(743) 00:14:40.565 fused_ordering(744) 00:14:40.565 fused_ordering(745) 00:14:40.565 fused_ordering(746) 00:14:40.565 fused_ordering(747) 00:14:40.565 fused_ordering(748) 00:14:40.565 fused_ordering(749) 00:14:40.565 fused_ordering(750) 00:14:40.565 fused_ordering(751) 00:14:40.565 fused_ordering(752) 00:14:40.565 fused_ordering(753) 00:14:40.565 fused_ordering(754) 00:14:40.565 fused_ordering(755) 00:14:40.565 fused_ordering(756) 00:14:40.565 fused_ordering(757) 00:14:40.565 fused_ordering(758) 00:14:40.565 fused_ordering(759) 00:14:40.565 fused_ordering(760) 00:14:40.565 fused_ordering(761) 00:14:40.565 fused_ordering(762) 00:14:40.565 fused_ordering(763) 00:14:40.565 fused_ordering(764) 00:14:40.565 fused_ordering(765) 00:14:40.565 fused_ordering(766) 00:14:40.565 fused_ordering(767) 00:14:40.565 fused_ordering(768) 00:14:40.565 fused_ordering(769) 00:14:40.565 fused_ordering(770) 00:14:40.565 fused_ordering(771) 00:14:40.565 fused_ordering(772) 00:14:40.565 fused_ordering(773) 00:14:40.565 fused_ordering(774) 00:14:40.565 fused_ordering(775) 00:14:40.565 
fused_ordering(776) 00:14:40.565 fused_ordering(777) 00:14:40.565 fused_ordering(778) 00:14:40.565 fused_ordering(779) 00:14:40.565 fused_ordering(780) 00:14:40.565 fused_ordering(781) 00:14:40.565 fused_ordering(782) 00:14:40.565 fused_ordering(783) 00:14:40.565 fused_ordering(784) 00:14:40.565 fused_ordering(785) 00:14:40.565 fused_ordering(786) 00:14:40.565 fused_ordering(787) 00:14:40.565 fused_ordering(788) 00:14:40.565 fused_ordering(789) 00:14:40.565 fused_ordering(790) 00:14:40.565 fused_ordering(791) 00:14:40.565 fused_ordering(792) 00:14:40.565 fused_ordering(793) 00:14:40.565 fused_ordering(794) 00:14:40.565 fused_ordering(795) 00:14:40.565 fused_ordering(796) 00:14:40.565 fused_ordering(797) 00:14:40.565 fused_ordering(798) 00:14:40.565 fused_ordering(799) 00:14:40.565 fused_ordering(800) 00:14:40.565 fused_ordering(801) 00:14:40.565 fused_ordering(802) 00:14:40.565 fused_ordering(803) 00:14:40.565 fused_ordering(804) 00:14:40.565 fused_ordering(805) 00:14:40.565 fused_ordering(806) 00:14:40.565 fused_ordering(807) 00:14:40.565 fused_ordering(808) 00:14:40.565 fused_ordering(809) 00:14:40.565 fused_ordering(810) 00:14:40.565 fused_ordering(811) 00:14:40.565 fused_ordering(812) 00:14:40.565 fused_ordering(813) 00:14:40.565 fused_ordering(814) 00:14:40.565 fused_ordering(815) 00:14:40.565 fused_ordering(816) 00:14:40.565 fused_ordering(817) 00:14:40.565 fused_ordering(818) 00:14:40.565 fused_ordering(819) 00:14:40.565 fused_ordering(820) 00:14:41.134 fused_o[2024-10-07 07:33:45.029690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x875de0 is same with the state(5) to be set 00:14:41.134 rdering(821) 00:14:41.134 fused_ordering(822) 00:14:41.134 fused_ordering(823) 00:14:41.134 fused_ordering(824) 00:14:41.134 fused_ordering(825) 00:14:41.134 fused_ordering(826) 00:14:41.134 fused_ordering(827) 00:14:41.134 fused_ordering(828) 00:14:41.134 fused_ordering(829) 00:14:41.134 fused_ordering(830) 00:14:41.134 fused_ordering(831) 
00:14:41.134 fused_ordering(832) 00:14:41.134 fused_ordering(833) 00:14:41.134 fused_ordering(834) 00:14:41.134 fused_ordering(835) 00:14:41.134 fused_ordering(836) 00:14:41.134 fused_ordering(837) 00:14:41.134 fused_ordering(838) 00:14:41.134 fused_ordering(839) 00:14:41.134 fused_ordering(840) 00:14:41.134 fused_ordering(841) 00:14:41.134 fused_ordering(842) 00:14:41.134 fused_ordering(843) 00:14:41.134 fused_ordering(844) 00:14:41.134 fused_ordering(845) 00:14:41.134 fused_ordering(846) 00:14:41.134 fused_ordering(847) 00:14:41.134 fused_ordering(848) 00:14:41.134 fused_ordering(849) 00:14:41.134 fused_ordering(850) 00:14:41.134 fused_ordering(851) 00:14:41.134 fused_ordering(852) 00:14:41.134 fused_ordering(853) 00:14:41.134 fused_ordering(854) 00:14:41.134 fused_ordering(855) 00:14:41.134 fused_ordering(856) 00:14:41.134 fused_ordering(857) 00:14:41.134 fused_ordering(858) 00:14:41.134 fused_ordering(859) 00:14:41.134 fused_ordering(860) 00:14:41.134 fused_ordering(861) 00:14:41.134 fused_ordering(862) 00:14:41.134 fused_ordering(863) 00:14:41.134 fused_ordering(864) 00:14:41.134 fused_ordering(865) 00:14:41.134 fused_ordering(866) 00:14:41.134 fused_ordering(867) 00:14:41.134 fused_ordering(868) 00:14:41.134 fused_ordering(869) 00:14:41.134 fused_ordering(870) 00:14:41.134 fused_ordering(871) 00:14:41.134 fused_ordering(872) 00:14:41.134 fused_ordering(873) 00:14:41.134 fused_ordering(874) 00:14:41.134 fused_ordering(875) 00:14:41.134 fused_ordering(876) 00:14:41.134 fused_ordering(877) 00:14:41.134 fused_ordering(878) 00:14:41.134 fused_ordering(879) 00:14:41.134 fused_ordering(880) 00:14:41.134 fused_ordering(881) 00:14:41.134 fused_ordering(882) 00:14:41.134 fused_ordering(883) 00:14:41.134 fused_ordering(884) 00:14:41.134 fused_ordering(885) 00:14:41.134 fused_ordering(886) 00:14:41.134 fused_ordering(887) 00:14:41.134 fused_ordering(888) 00:14:41.134 fused_ordering(889) 00:14:41.134 fused_ordering(890) 00:14:41.134 fused_ordering(891) 00:14:41.134 
fused_ordering(892) 00:14:41.134 fused_ordering(893) 00:14:41.134 fused_ordering(894) 00:14:41.134 fused_ordering(895) 00:14:41.134 fused_ordering(896) 00:14:41.134 fused_ordering(897) 00:14:41.134 fused_ordering(898) 00:14:41.134 fused_ordering(899) 00:14:41.134 fused_ordering(900) 00:14:41.134 fused_ordering(901) 00:14:41.134 fused_ordering(902) 00:14:41.134 fused_ordering(903) 00:14:41.134 fused_ordering(904) 00:14:41.134 fused_ordering(905) 00:14:41.134 fused_ordering(906) 00:14:41.134 fused_ordering(907) 00:14:41.134 fused_ordering(908) 00:14:41.134 fused_ordering(909) 00:14:41.134 fused_ordering(910) 00:14:41.134 fused_ordering(911) 00:14:41.134 fused_ordering(912) 00:14:41.134 fused_ordering(913) 00:14:41.134 fused_ordering(914) 00:14:41.134 fused_ordering(915) 00:14:41.134 fused_ordering(916) 00:14:41.134 fused_ordering(917) 00:14:41.134 fused_ordering(918) 00:14:41.134 fused_ordering(919) 00:14:41.134 fused_ordering(920) 00:14:41.134 fused_ordering(921) 00:14:41.134 fused_ordering(922) 00:14:41.134 fused_ordering(923) 00:14:41.134 fused_ordering(924) 00:14:41.134 fused_ordering(925) 00:14:41.134 fused_ordering(926) 00:14:41.134 fused_ordering(927) 00:14:41.134 fused_ordering(928) 00:14:41.134 fused_ordering(929) 00:14:41.134 fused_ordering(930) 00:14:41.134 fused_ordering(931) 00:14:41.134 fused_ordering(932) 00:14:41.134 fused_ordering(933) 00:14:41.134 fused_ordering(934) 00:14:41.134 fused_ordering(935) 00:14:41.134 fused_ordering(936) 00:14:41.134 fused_ordering(937) 00:14:41.134 fused_ordering(938) 00:14:41.134 fused_ordering(939) 00:14:41.134 fused_ordering(940) 00:14:41.134 fused_ordering(941) 00:14:41.134 fused_ordering(942) 00:14:41.134 fused_ordering(943) 00:14:41.134 fused_ordering(944) 00:14:41.134 fused_ordering(945) 00:14:41.134 fused_ordering(946) 00:14:41.134 fused_ordering(947) 00:14:41.134 fused_ordering(948) 00:14:41.134 fused_ordering(949) 00:14:41.134 fused_ordering(950) 00:14:41.134 fused_ordering(951) 00:14:41.134 fused_ordering(952) 
00:14:41.134 fused_ordering(953) 00:14:41.134 fused_ordering(954) 00:14:41.134 fused_ordering(955) 00:14:41.134 fused_ordering(956) 00:14:41.134 fused_ordering(957) 00:14:41.134 fused_ordering(958) 00:14:41.134 fused_ordering(959) 00:14:41.134 fused_ordering(960) 00:14:41.134 fused_ordering(961) 00:14:41.134 fused_ordering(962) 00:14:41.134 fused_ordering(963) 00:14:41.134 fused_ordering(964) 00:14:41.134 fused_ordering(965) 00:14:41.134 fused_ordering(966) 00:14:41.134 fused_ordering(967) 00:14:41.134 fused_ordering(968) 00:14:41.134 fused_ordering(969) 00:14:41.134 fused_ordering(970) 00:14:41.134 fused_ordering(971) 00:14:41.134 fused_ordering(972) 00:14:41.134 fused_ordering(973) 00:14:41.134 fused_ordering(974) 00:14:41.134 fused_ordering(975) 00:14:41.134 fused_ordering(976) 00:14:41.134 fused_ordering(977) 00:14:41.134 fused_ordering(978) 00:14:41.134 fused_ordering(979) 00:14:41.134 fused_ordering(980) 00:14:41.134 fused_ordering(981) 00:14:41.134 fused_ordering(982) 00:14:41.134 fused_ordering(983) 00:14:41.134 fused_ordering(984) 00:14:41.134 fused_ordering(985) 00:14:41.135 fused_ordering(986) 00:14:41.135 fused_ordering(987) 00:14:41.135 fused_ordering(988) 00:14:41.135 fused_ordering(989) 00:14:41.135 fused_ordering(990) 00:14:41.135 fused_ordering(991) 00:14:41.135 fused_ordering(992) 00:14:41.135 fused_ordering(993) 00:14:41.135 fused_ordering(994) 00:14:41.135 fused_ordering(995) 00:14:41.135 fused_ordering(996) 00:14:41.135 fused_ordering(997) 00:14:41.135 fused_ordering(998) 00:14:41.135 fused_ordering(999) 00:14:41.135 fused_ordering(1000) 00:14:41.135 fused_ordering(1001) 00:14:41.135 fused_ordering(1002) 00:14:41.135 fused_ordering(1003) 00:14:41.135 fused_ordering(1004) 00:14:41.135 fused_ordering(1005) 00:14:41.135 fused_ordering(1006) 00:14:41.135 fused_ordering(1007) 00:14:41.135 fused_ordering(1008) 00:14:41.135 fused_ordering(1009) 00:14:41.135 fused_ordering(1010) 00:14:41.135 fused_ordering(1011) 00:14:41.135 fused_ordering(1012) 
00:14:41.135 fused_ordering(1013) ... 00:14:41.135 fused_ordering(1023)   [consecutive fused_ordering entries elided]
00:14:41.135 07:33:45 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:41.135 07:33:45 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:41.135 07:33:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:41.135 07:33:45 -- nvmf/common.sh@116 -- # sync
00:14:41.135 07:33:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:41.135 07:33:45 -- nvmf/common.sh@119 -- # set +e
00:14:41.135 07:33:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:41.135 07:33:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:41.135 rmmod nvme_tcp
00:14:41.135 rmmod nvme_fabrics
00:14:41.135 rmmod nvme_keyring
00:14:41.135 07:33:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:41.135 07:33:45 -- nvmf/common.sh@123 -- # set -e
00:14:41.135 07:33:45 -- nvmf/common.sh@124 -- # return 0
00:14:41.135 07:33:45 -- nvmf/common.sh@477 -- # '[' -n 4071227 ']'
00:14:41.135 07:33:45 -- nvmf/common.sh@478 -- # killprocess 4071227
00:14:41.135 07:33:45 -- common/autotest_common.sh@926 -- # '[' -z 4071227 ']'
00:14:41.135 07:33:45 -- common/autotest_common.sh@930 -- # kill -0 4071227
00:14:41.135 07:33:45 -- common/autotest_common.sh@931 -- # uname
00:14:41.394 07:33:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:41.394 07:33:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4071227
00:14:41.394 07:33:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:14:41.394 07:33:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:14:41.394 07:33:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4071227'
00:14:41.394 killing process with pid 4071227
00:14:41.394 07:33:45 -- common/autotest_common.sh@945 -- # kill 4071227
00:14:41.394 07:33:45 -- common/autotest_common.sh@950 -- # wait 4071227
00:14:41.394 07:33:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:14:41.394 07:33:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:14:41.394 07:33:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:14:41.394 07:33:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:41.394 07:33:45 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:41.394 07:33:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:41.394 07:33:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:41.394 07:33:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:43.929 07:33:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:14:43.929
00:14:43.929 real 0m10.915s
00:14:43.929 user 0m5.684s
00:14:43.929 sys 0m5.658s
00:14:43.929 07:33:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:43.929 07:33:47 -- common/autotest_common.sh@10 -- # set +x
00:14:43.929 ************************************
00:14:43.929 END TEST nvmf_fused_ordering
00:14:43.929 ************************************
00:14:43.929 07:33:47 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:14:43.929 07:33:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:43.929 07:33:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:43.929 07:33:47 -- common/autotest_common.sh@10 -- # set +x
00:14:43.929 ************************************
00:14:43.929 START TEST nvmf_delete_subsystem
00:14:43.929 ************************************
00:14:43.929 07:33:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:14:43.929 * Looking for test storage...
00:14:43.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:43.929 07:33:47 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:43.929 07:33:47 -- nvmf/common.sh@7 -- # uname -s
00:14:43.929 07:33:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:43.929 07:33:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:43.929 07:33:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:43.929 07:33:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:43.929 07:33:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:43.929 07:33:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:43.929 07:33:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:43.929 07:33:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:43.929 07:33:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:43.929 07:33:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:43.929 07:33:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:43.929 07:33:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:43.929 07:33:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:43.929 07:33:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:43.929 07:33:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:43.929 07:33:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:43.929 07:33:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:43.929 07:33:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:43.929 07:33:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:43.930 07:33:47 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.930 07:33:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.930 07:33:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.930 07:33:47 -- paths/export.sh@5 -- # export PATH 00:14:43.930 07:33:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.930 07:33:47 -- nvmf/common.sh@46 -- # : 0 00:14:43.930 07:33:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:43.930 07:33:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:43.930 07:33:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:43.930 07:33:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.930 07:33:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.930 07:33:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:43.930 07:33:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:43.930 07:33:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:43.930 07:33:47 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:43.930 07:33:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:43.930 07:33:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.930 07:33:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:43.930 07:33:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:43.930 07:33:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:43.930 07:33:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.930 07:33:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.930 07:33:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.930 07:33:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:43.930 07:33:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:43.930 07:33:47 
-- nvmf/common.sh@284 -- # xtrace_disable
00:14:43.930 07:33:47 -- common/autotest_common.sh@10 -- # set +x
00:14:49.208 07:33:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:49.208 07:33:52 -- nvmf/common.sh@290 -- # pci_devs=()
00:14:49.208 07:33:52 -- nvmf/common.sh@290 -- # local -a pci_devs
00:14:49.208 07:33:52 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:14:49.208 07:33:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:14:49.208 07:33:52 -- nvmf/common.sh@292 -- # pci_drivers=()
00:14:49.208 07:33:52 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:14:49.208 07:33:52 -- nvmf/common.sh@294 -- # net_devs=()
00:14:49.208 07:33:52 -- nvmf/common.sh@294 -- # local -ga net_devs
00:14:49.208 07:33:52 -- nvmf/common.sh@295 -- # e810=()
00:14:49.208 07:33:52 -- nvmf/common.sh@295 -- # local -ga e810
00:14:49.208 07:33:52 -- nvmf/common.sh@296 -- # x722=()
00:14:49.208 07:33:52 -- nvmf/common.sh@296 -- # local -ga x722
00:14:49.208 07:33:52 -- nvmf/common.sh@297 -- # mlx=()
00:14:49.208 07:33:52 -- nvmf/common.sh@297 -- # local -ga mlx
00:14:49.208 07:33:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:49.208 07:33:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:14:49.208 07:33:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:14:49.208 07:33:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:14:49.208 07:33:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:14:49.209 07:33:52 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:14:49.209 07:33:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:49.209 Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:49.209 07:33:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:14:49.209 07:33:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:49.209 Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:49.209 07:33:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:14:49.209 07:33:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:49.209 07:33:52 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:49.209 07:33:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:49.209 Found net devices under 0000:af:00.0: cvl_0_0
00:14:49.209 07:33:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:14:49.209 07:33:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:14:49.209 07:33:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:49.209 07:33:52 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:49.209 07:33:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:14:49.209 Found net devices under 0000:af:00.1: cvl_0_1
00:14:49.209 07:33:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:14:49.209 07:33:52 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@402 -- # is_hw=yes
00:14:49.209 07:33:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:14:49.209 07:33:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:14:49.209 07:33:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:49.209 07:33:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:49.209 07:33:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:49.209 07:33:52 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:14:49.209 07:33:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:49.209 07:33:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:49.209 07:33:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:14:49.209 07:33:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:49.209 07:33:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.209 07:33:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:49.209 07:33:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:49.209 07:33:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.209 07:33:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.209 07:33:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.209 07:33:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.209 07:33:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:49.209 07:33:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.209 07:33:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.209 07:33:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.209 07:33:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:49.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:14:49.209 00:14:49.209 --- 10.0.0.2 ping statistics --- 00:14:49.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.209 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:14:49.209 07:33:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:49.209 00:14:49.209 --- 10.0.0.1 ping statistics --- 00:14:49.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.209 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:49.209 07:33:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.209 07:33:53 -- nvmf/common.sh@410 -- # return 0 00:14:49.209 07:33:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.209 07:33:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.209 07:33:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:49.209 07:33:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:49.209 07:33:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.209 07:33:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:49.209 07:33:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:49.209 07:33:53 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:49.209 07:33:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.209 07:33:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:49.209 07:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:49.209 07:33:53 -- nvmf/common.sh@469 -- # nvmfpid=4075185 00:14:49.209 07:33:53 -- nvmf/common.sh@470 -- # waitforlisten 4075185 00:14:49.209 07:33:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:49.209 07:33:53 -- common/autotest_common.sh@819 -- # '[' -z 4075185 ']' 00:14:49.209 07:33:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.209 07:33:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.209 07:33:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
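The `nvmf_tcp_init` trace above sets up a two-endpoint TCP loopback by moving one port of the NIC into a dedicated network namespace. Collected into one standalone sketch (a configuration fragment, not runnable in the verifier: it requires root, and the `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.x addresses are specific to this test rig):

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init in this trace.
# Interface names and addresses are copied from the log above and are
# environment-specific assumptions; all commands require root.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target-side port in its own network namespace
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Initiator gets 10.0.0.1, target gets 10.0.0.2 on the same /24
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator side
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as the trace does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the trace prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.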
00:14:49.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.209 07:33:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.209 07:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:49.209 [2024-10-07 07:33:53.170932] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:49.209 [2024-10-07 07:33:53.170976] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.468 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.468 [2024-10-07 07:33:53.228482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:49.468 [2024-10-07 07:33:53.303927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.468 [2024-10-07 07:33:53.304034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.468 [2024-10-07 07:33:53.304044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.468 [2024-10-07 07:33:53.304050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.468 [2024-10-07 07:33:53.304095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.468 [2024-10-07 07:33:53.304098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.036 07:33:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.036 07:33:53 -- common/autotest_common.sh@852 -- # return 0 00:14:50.036 07:33:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.036 07:33:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:50.036 07:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 07:33:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 [2024-10-07 07:33:54.036290] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 [2024-10-07 07:33:54.056468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:50.295 07:33:54 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 NULL1 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 Delay0 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.295 07:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.295 07:33:54 -- common/autotest_common.sh@10 -- # set +x 00:14:50.295 07:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@28 -- # perf_pid=4075408 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:50.295 07:33:54 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.295 [2024-10-07 07:33:54.138021] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
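The `rpc_cmd` calls in `delete_subsystem.sh` above build the subsystem under test and then start a perf load against it before deleting it. A sketch of that sequence (arguments are copied from the trace; the `scripts/rpc.py` invocation and socket path are assumptions, since the trace issues these through the harness's `rpc_cmd` wrapper):

```shell
# Sketch of the target setup driven by delete_subsystem.sh in this trace.
# RPC arguments are copied from the log; the rpc.py path is an assumption.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A null bdev wrapped in a delay bdev, so that deleting the subsystem
# races against I/O still held in the delay queue
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Background random-I/O load while the subsystem is torn down
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The flood of `Read/Write completed with error (sct=0, sc=8)` lines that follows is the expected outcome: deleting the subsystem aborts the perf tool's in-flight commands.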
00:14:52.198 07:33:56 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.198 07:33:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.198 07:33:56 -- common/autotest_common.sh@10 -- # set +x 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error 
(sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with 
error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, 
sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 
00:14:52.457 starting I/O failed: -6 00:14:52.457 [2024-10-07 07:33:56.299790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd03c60 is same with the state(5) to be set 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.457 Write completed with error (sct=0, sc=8) 00:14:52.457 starting I/O failed: -6 00:14:52.457 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 
starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed 
with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 Read 
completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Write completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 Read completed with error (sct=0, sc=8) 00:14:52.458 starting I/O failed: -6 00:14:53.394 [2024-10-07 07:33:57.273518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09b90 is same with the state(5) to be set 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Write completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Read completed with error (sct=0, sc=8) 00:14:53.394 Write completed with 
error (sct=0, sc=8)
[per-I/O completion lines condensed: every outstanding Read/Write request completed with error (sct=0, sc=8) between 00:14:53.394 and 00:14:53.395]
00:14:53.395 [2024-10-07 07:33:57.301542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f094000c1d0 is same with the state(5) to be set
00:14:53.395 [2024-10-07 07:33:57.301952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef930 is same with the state(5) to be set
00:14:53.395 [2024-10-07 07:33:57.302107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd03de0 is same with the state(5) to be set
00:14:53.395 [2024-10-07 07:33:57.302257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcefab0 is same with the state(5) to be set
00:14:53.395 [2024-10-07 07:33:57.302743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd09b90 (9): Bad file descriptor
00:14:53.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:53.395 07:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:53.395 07:33:57 -- target/delete_subsystem.sh@34 -- # delay=0
00:14:53.395 07:33:57 -- target/delete_subsystem.sh@35 -- # kill -0 4075408
00:14:53.395 07:33:57 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:53.395 Initializing NVMe Controllers
00:14:53.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:53.395 Controller IO queue size 128, less than required.
00:14:53.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:53.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:53.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:53.395 Initialization complete. Launching workers.
00:14:53.395 ======================================================== 00:14:53.395 Latency(us) 00:14:53.395 Device Information : IOPS MiB/s Average min max 00:14:53.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.08 0.09 945693.84 979.96 1012545.71 00:14:53.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 177.69 0.09 848740.28 450.61 1011159.69 00:14:53.395 ======================================================== 00:14:53.395 Total : 370.77 0.18 899228.81 450.61 1012545.71 00:14:53.395 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@35 -- # kill -0 4075408 00:14:53.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4075408) - No such process 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@45 -- # NOT wait 4075408 00:14:53.963 07:33:57 -- common/autotest_common.sh@640 -- # local es=0 00:14:53.963 07:33:57 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 4075408 00:14:53.963 07:33:57 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:53.963 07:33:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:53.963 07:33:57 -- common/autotest_common.sh@632 -- # type -t wait 00:14:53.963 07:33:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:53.963 07:33:57 -- common/autotest_common.sh@643 -- # wait 4075408 00:14:53.963 07:33:57 -- common/autotest_common.sh@643 -- # es=1 00:14:53.963 07:33:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:53.963 07:33:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:53.963 07:33:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:53.963 07:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 
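The delete_subsystem.sh trace above keeps evaluating `(( delay++ > 30 ))`, `kill -0 <pid>`, and `sleep 0.5` until the perf process exits. A minimal sketch of that polling pattern, reconstructed from the xtrace only (not the script's actual source; the helper name `pid_wait` and the stand-in `sleep` process are my additions):

```shell
# Poll until a process exits or a retry budget is exhausted.
# kill -0 sends no signal; it only checks that the PID still exists.
pid_wait() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 30 )) && return 1   # give up after ~15s (30 * 0.5s)
        sleep 0.5
    done
    return 0
}

sleep 0.2 &          # stand-in for the spdk_nvme_perf process in the log
pid_wait $!
echo "exit=$?"
```

Once the PID is gone, `kill` on it reports "No such process", which is exactly the message the log shows at delete_subsystem.sh line 35 after the perf job dies.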
00:14:53.963 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.963 07:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.963 07:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.963 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.963 [2024-10-07 07:33:57.830826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.963 07:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.963 07:33:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.963 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.963 07:33:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@54 -- # perf_pid=4076081 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:53.963 07:33:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:53.963 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.963 [2024-10-07 07:33:57.890222] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:54.535 07:33:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:54.535 07:33:58 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:54.535 07:33:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:55.100 07:33:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:55.100 07:33:58 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:55.100 07:33:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:55.664 07:33:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:55.664 07:33:59 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:55.664 07:33:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:55.921 07:33:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:55.921 07:33:59 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:55.921 07:33:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:56.488 07:34:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:56.488 07:34:00 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:56.488 07:34:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:57.057 07:34:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:57.057 07:34:00 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:57.057 07:34:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:57.057 Initializing NVMe Controllers 00:14:57.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.057 Controller IO queue size 128, less than required. 00:14:57.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:57.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:57.057 Initialization complete. Launching workers. 
00:14:57.057 ======================================================== 00:14:57.057 Latency(us) 00:14:57.057 Device Information : IOPS MiB/s Average min max 00:14:57.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002999.80 1000161.55 1010574.33 00:14:57.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005366.52 1000174.52 1042322.47 00:14:57.057 ======================================================== 00:14:57.057 Total : 256.00 0.12 1004183.16 1000161.55 1042322.47 00:14:57.057 00:14:57.627 07:34:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:57.627 07:34:01 -- target/delete_subsystem.sh@57 -- # kill -0 4076081 00:14:57.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4076081) - No such process 00:14:57.627 07:34:01 -- target/delete_subsystem.sh@67 -- # wait 4076081 00:14:57.627 07:34:01 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:57.627 07:34:01 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:57.627 07:34:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:57.627 07:34:01 -- nvmf/common.sh@116 -- # sync 00:14:57.627 07:34:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:57.627 07:34:01 -- nvmf/common.sh@119 -- # set +e 00:14:57.627 07:34:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:57.627 07:34:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:57.627 rmmod nvme_tcp 00:14:57.627 rmmod nvme_fabrics 00:14:57.627 rmmod nvme_keyring 00:14:57.627 07:34:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:57.627 07:34:01 -- nvmf/common.sh@123 -- # set -e 00:14:57.627 07:34:01 -- nvmf/common.sh@124 -- # return 0 00:14:57.627 07:34:01 -- nvmf/common.sh@477 -- # '[' -n 4075185 ']' 00:14:57.627 07:34:01 -- nvmf/common.sh@478 -- # killprocess 4075185 00:14:57.627 07:34:01 -- common/autotest_common.sh@926 -- # '[' -z 4075185 ']' 00:14:57.627 07:34:01 
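As a sanity check on the latency table above, the "Total" average is the IOPS-weighted mean of the two per-core averages. This sketch redoes that arithmetic with the numbers copied from the log, using awk for the floating-point math (variable names are mine):

```shell
# Recompute the Total row of the perf latency table from the per-core rows.
awk 'BEGIN {
    iops2 = 128.00; avg2 = 1002999.80   # core 2: IOPS, average latency (us)
    iops3 = 128.00; avg3 = 1005366.52   # core 3: IOPS, average latency (us)
    total = iops2 + iops3
    wavg  = (iops2 * avg2 + iops3 * avg3) / total
    printf "%.2f %.2f\n", total, wavg    # prints: 256.00 1004183.16
}'
```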
-- common/autotest_common.sh@930 -- # kill -0 4075185 00:14:57.627 07:34:01 -- common/autotest_common.sh@931 -- # uname 00:14:57.627 07:34:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.627 07:34:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4075185 00:14:57.627 07:34:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:57.627 07:34:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.627 07:34:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4075185' 00:14:57.627 killing process with pid 4075185 00:14:57.627 07:34:01 -- common/autotest_common.sh@945 -- # kill 4075185 00:14:57.627 07:34:01 -- common/autotest_common.sh@950 -- # wait 4075185 00:14:57.886 07:34:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:57.886 07:34:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:57.886 07:34:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:57.886 07:34:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.886 07:34:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:57.886 07:34:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.886 07:34:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.886 07:34:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.420 07:34:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:00.420 00:15:00.420 real 0m16.318s 00:15:00.420 user 0m30.440s 00:15:00.420 sys 0m5.147s 00:15:00.420 07:34:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.420 07:34:03 -- common/autotest_common.sh@10 -- # set +x 00:15:00.420 ************************************ 00:15:00.420 END TEST nvmf_delete_subsystem 00:15:00.420 ************************************ 00:15:00.420 07:34:03 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:00.420 07:34:03 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.420 07:34:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.420 07:34:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.420 07:34:03 -- common/autotest_common.sh@10 -- # set +x 00:15:00.420 ************************************ 00:15:00.420 START TEST nvmf_nvme_cli 00:15:00.420 ************************************ 00:15:00.420 07:34:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:00.420 * Looking for test storage... 00:15:00.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.420 07:34:03 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.420 07:34:03 -- nvmf/common.sh@7 -- # uname -s 00:15:00.420 07:34:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.420 07:34:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.420 07:34:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.420 07:34:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.420 07:34:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.420 07:34:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.420 07:34:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.420 07:34:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.420 07:34:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.420 07:34:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.420 07:34:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:00.420 07:34:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:00.420 07:34:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.420 
07:34:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:00.420 07:34:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:00.420 07:34:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:00.420 07:34:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:00.420 07:34:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:00.420 07:34:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:00.420 07:34:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated toolchain segments condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:00.420 07:34:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[condensed as above]
00:15:00.420 07:34:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[condensed as above]
00:15:00.420 07:34:03 -- paths/export.sh@5 -- # export PATH
00:15:00.420 07:34:03 -- paths/export.sh@6 -- # echo [PATH value as above]
00:15:00.420 07:34:03 -- nvmf/common.sh@46 -- # : 0
00:15:00.420 07:34:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:15:00.420 07:34:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:15:00.420 07:34:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:15:00.420 07:34:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:00.420 07:34:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:00.420 07:34:03 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:15:00.420 07:34:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:15:00.420 07:34:03 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:15:00.420 07:34:03 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:00.420 07:34:03 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:00.420 07:34:03 -- target/nvme_cli.sh@14
-- # devs=() 00:15:00.420 07:34:03 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:00.420 07:34:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:00.420 07:34:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.420 07:34:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:00.420 07:34:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:00.420 07:34:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:00.420 07:34:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.420 07:34:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.420 07:34:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.420 07:34:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:00.420 07:34:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:00.420 07:34:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:00.420 07:34:03 -- common/autotest_common.sh@10 -- # set +x 00:15:05.696 07:34:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:05.696 07:34:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:05.696 07:34:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:05.696 07:34:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:05.697 07:34:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:05.697 07:34:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:05.697 07:34:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:05.697 07:34:09 -- nvmf/common.sh@294 -- # net_devs=() 00:15:05.697 07:34:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:05.697 07:34:09 -- nvmf/common.sh@295 -- # e810=() 00:15:05.697 07:34:09 -- nvmf/common.sh@295 -- # local -ga e810 00:15:05.697 07:34:09 -- nvmf/common.sh@296 -- # x722=() 00:15:05.697 07:34:09 -- nvmf/common.sh@296 -- # local -ga x722 00:15:05.697 07:34:09 -- nvmf/common.sh@297 -- # mlx=() 00:15:05.697 07:34:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:05.697 07:34:09 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.697 07:34:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:05.697 07:34:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:05.697 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:05.697 07:34:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:05.697 07:34:09 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:05.697 07:34:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:05.697 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:05.697 07:34:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:05.697 07:34:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.697 07:34:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.697 07:34:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:05.697 Found net devices under 0000:af:00.0: cvl_0_0 00:15:05.697 07:34:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:05.697 07:34:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.697 07:34:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.697 07:34:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:05.697 Found net devices under 0000:af:00.1: cvl_0_1 00:15:05.697 07:34:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@402 
-- # is_hw=yes 00:15:05.697 07:34:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:05.697 07:34:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.697 07:34:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.697 07:34:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:05.697 07:34:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.697 07:34:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.697 07:34:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:05.697 07:34:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.697 07:34:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.697 07:34:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:05.697 07:34:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:05.697 07:34:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.697 07:34:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.697 07:34:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.697 07:34:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.697 07:34:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:05.697 07:34:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.697 07:34:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.697 07:34:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.697 07:34:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:05.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:05.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:15:05.697 00:15:05.697 --- 10.0.0.2 ping statistics --- 00:15:05.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.697 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:15:05.697 07:34:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:15:05.697 00:15:05.697 --- 10.0.0.1 ping statistics --- 00:15:05.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.697 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:15:05.697 07:34:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.697 07:34:09 -- nvmf/common.sh@410 -- # return 0 00:15:05.697 07:34:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.697 07:34:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.697 07:34:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.697 07:34:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.697 07:34:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.697 07:34:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.697 07:34:09 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:05.697 07:34:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.697 07:34:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:05.697 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:15:05.697 07:34:09 -- nvmf/common.sh@469 -- # nvmfpid=4080029 00:15:05.697 07:34:09 -- nvmf/common.sh@470 -- # waitforlisten 4080029 00:15:05.697 07:34:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.697 07:34:09 -- common/autotest_common.sh@819 
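The nvmf_tcp_init steps traced above (nvmf/common.sh@241-267) condense to the following provisioning sketch. Every command, device name (cvl_0_0/cvl_0_1), and address is copied from the trace itself; it requires root and real NICs, so treat it as a reference fragment for reading the log, not something to run as-is:

```shell
# Put the target-side NIC into its own network namespace so initiator and
# target can talk to each other over physical ports on a single host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```

The two ping transcripts in the log are the final connectivity check of exactly this setup; nvmf_tgt is then launched with `ip netns exec cvl_0_0_ns_spdk` so it listens on 10.0.0.2 inside the namespace.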
-- # '[' -z 4080029 ']' 00:15:05.697 07:34:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.697 07:34:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.697 07:34:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.697 07:34:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.697 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:15:05.697 [2024-10-07 07:34:09.653716] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:05.697 [2024-10-07 07:34:09.653760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.957 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.957 [2024-10-07 07:34:09.716487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.957 [2024-10-07 07:34:09.800147] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.957 [2024-10-07 07:34:09.800269] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.957 [2024-10-07 07:34:09.800279] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.957 [2024-10-07 07:34:09.800288] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:05.957 [2024-10-07 07:34:09.800340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.957 [2024-10-07 07:34:09.800441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.957 [2024-10-07 07:34:09.800458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.957 [2024-10-07 07:34:09.800459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.527 07:34:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.527 07:34:10 -- common/autotest_common.sh@852 -- # return 0 00:15:06.527 07:34:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:06.527 07:34:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:06.527 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 07:34:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.786 07:34:10 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 [2024-10-07 07:34:10.512288] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 Malloc0 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 Malloc1 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
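The nvmfappstart flow above (waitforlisten at nvmf/common.sh@470, returning 0 once the target answers on /var/tmp/spdk.sock) boils down to a bounded poll. A minimal stand-in for that loop — the path, retry count, and function name here are illustrative, and the real helper also probes the socket with an RPC before returning:

```shell
# Bounded poll for a path to appear, like waitforlisten waiting on
# /var/tmp/spdk.sock. Returns 0 once found, 1 after max_retries.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# Example: create the path in the background, then wait for it.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_path "$tmp/spdk.sock" 50 && echo "target is listening"
```

If the path never appears, the function gives up after max_retries polls and returns nonzero, which is what lets the suite fail fast when nvmf_tgt does not come up.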
00:15:06.786 07:34:10 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 [2024-10-07 07:34:10.589085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.786 07:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.786 07:34:10 -- common/autotest_common.sh@10 -- # set +x 00:15:06.786 07:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.786 07:34:10 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:07.045 00:15:07.046 Discovery Log Number of Records 2, Generation counter 2 00:15:07.046 =====Discovery Log Entry 0====== 00:15:07.046 trtype: tcp 00:15:07.046 adrfam: ipv4 00:15:07.046 subtype: current discovery subsystem 00:15:07.046 treq: not required 00:15:07.046 portid: 0 00:15:07.046 trsvcid: 4420 00:15:07.046 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:07.046 traddr: 10.0.0.2 00:15:07.046 eflags: explicit discovery connections, duplicate discovery information 00:15:07.046 sectype: none 00:15:07.046 =====Discovery Log Entry 1====== 00:15:07.046 trtype: tcp 00:15:07.046 adrfam: ipv4 00:15:07.046 subtype: nvme subsystem 00:15:07.046 treq: not required 00:15:07.046 portid: 0 00:15:07.046 trsvcid: 4420 00:15:07.046 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:07.046 traddr: 10.0.0.2 00:15:07.046 eflags: none 00:15:07.046 sectype: none 00:15:07.046 07:34:10 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:07.046 07:34:10 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:07.046 07:34:10 -- nvmf/common.sh@510 -- # local dev _ 00:15:07.046 07:34:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:07.046 07:34:10 -- nvmf/common.sh@509 -- # nvme list 00:15:07.046 07:34:10 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:07.046 07:34:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:07.046 07:34:10 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:07.046 07:34:10 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:07.046 07:34:10 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:07.046 07:34:10 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.983 07:34:11 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:07.983 07:34:11 -- common/autotest_common.sh@1177 -- # local i=0 00:15:07.983 07:34:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.983 07:34:11 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:07.983 07:34:11 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:07.983 07:34:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:10.004 07:34:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:10.004 07:34:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:10.004 07:34:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.004 07:34:13 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:10.004 07:34:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.004 07:34:13 -- common/autotest_common.sh@1187 -- # return 0 00:15:10.004 07:34:13 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:10.004 07:34:13 -- nvmf/common.sh@510 -- # local dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@509 -- # nvme list 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 
00:15:10.004 /dev/nvme0n2 ]] 00:15:10.004 07:34:13 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:10.004 07:34:13 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:10.004 07:34:13 -- nvmf/common.sh@510 -- # local dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@509 -- # nvme list 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.004 07:34:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.004 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.264 07:34:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:10.264 07:34:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:10.264 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.264 07:34:13 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:10.264 07:34:13 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:10.264 07:34:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.264 07:34:13 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:10.264 07:34:13 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.264 07:34:14 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.264 07:34:14 -- common/autotest_common.sh@1198 -- # local i=0 00:15:10.264 07:34:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:10.264 07:34:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.264 07:34:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:10.264 07:34:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.264 07:34:14 -- common/autotest_common.sh@1210 -- # return 0 00:15:10.264 07:34:14 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
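The get_nvme_devs helper traced above (nvmf/common.sh@509–514) pipes `nvme list` through a read loop and echoes only first fields that name a /dev/nvme* node, so the header and separator rows fall through the filter. The same logic can be exercised on canned output — the sample listing below is illustrative, and `case` stands in for the script's bash-only `[[ ... == /dev/nvme* ]]` test:

```shell
# Filter 'nvme list'-style output down to /dev/nvme* device nodes,
# mirroring get_nvme_devs: read the first field, keep matching lines.
get_nvme_devs() {
    while read -r dev _; do
        case $dev in
            /dev/nvme*) echo "$dev" ;;
        esac
    done
}

# Canned listing: header, separator, then two namespaces.
get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
EOF
```

On this input only the two device paths are printed, matching the `echo /dev/nvme0n1` / `echo /dev/nvme0n2` pairs visible in the trace.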
00:15:10.264 07:34:14 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.264 07:34:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.264 07:34:14 -- common/autotest_common.sh@10 -- # set +x 00:15:10.264 07:34:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.264 07:34:14 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:10.264 07:34:14 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:10.264 07:34:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:10.264 07:34:14 -- nvmf/common.sh@116 -- # sync 00:15:10.264 07:34:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:10.264 07:34:14 -- nvmf/common.sh@119 -- # set +e 00:15:10.264 07:34:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:10.264 07:34:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:10.264 rmmod nvme_tcp 00:15:10.264 rmmod nvme_fabrics 00:15:10.264 rmmod nvme_keyring 00:15:10.523 07:34:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:10.523 07:34:14 -- nvmf/common.sh@123 -- # set -e 00:15:10.523 07:34:14 -- nvmf/common.sh@124 -- # return 0 00:15:10.523 07:34:14 -- nvmf/common.sh@477 -- # '[' -n 4080029 ']' 00:15:10.523 07:34:14 -- nvmf/common.sh@478 -- # killprocess 4080029 00:15:10.523 07:34:14 -- common/autotest_common.sh@926 -- # '[' -z 4080029 ']' 00:15:10.523 07:34:14 -- common/autotest_common.sh@930 -- # kill -0 4080029 00:15:10.523 07:34:14 -- common/autotest_common.sh@931 -- # uname 00:15:10.523 07:34:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:10.523 07:34:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4080029 00:15:10.523 07:34:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:10.523 07:34:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:10.523 07:34:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4080029' 00:15:10.523 killing process with pid 4080029 00:15:10.523 07:34:14 -- 
common/autotest_common.sh@945 -- # kill 4080029 00:15:10.523 07:34:14 -- common/autotest_common.sh@950 -- # wait 4080029 00:15:10.783 07:34:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:10.783 07:34:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:10.783 07:34:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:10.783 07:34:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.783 07:34:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:10.783 07:34:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.783 07:34:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.783 07:34:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.688 07:34:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:12.688 00:15:12.688 real 0m12.798s 00:15:12.688 user 0m20.453s 00:15:12.688 sys 0m4.890s 00:15:12.688 07:34:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.688 07:34:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.688 ************************************ 00:15:12.688 END TEST nvmf_nvme_cli 00:15:12.688 ************************************ 00:15:12.688 07:34:16 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:12.689 07:34:16 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:12.689 07:34:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:12.689 07:34:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.689 07:34:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.948 ************************************ 00:15:12.948 START TEST nvmf_host_management 00:15:12.948 ************************************ 00:15:12.948 07:34:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:12.948 * Looking for test storage... 
00:15:12.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.949 07:34:16 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.949 07:34:16 -- nvmf/common.sh@7 -- # uname -s 00:15:12.949 07:34:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.949 07:34:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.949 07:34:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.949 07:34:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.949 07:34:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.949 07:34:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.949 07:34:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.949 07:34:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.949 07:34:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.949 07:34:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.949 07:34:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.949 07:34:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:12.949 07:34:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.949 07:34:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.949 07:34:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.949 07:34:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.949 07:34:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.949 07:34:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.949 07:34:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.949 07:34:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.949 07:34:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.949 07:34:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.949 07:34:16 -- paths/export.sh@5 -- # export PATH 00:15:12.949 07:34:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.949 07:34:16 -- nvmf/common.sh@46 -- # : 0 00:15:12.949 07:34:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.949 07:34:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.949 07:34:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.949 07:34:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.949 07:34:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.949 07:34:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.949 07:34:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.949 07:34:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.949 07:34:16 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.949 07:34:16 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.949 07:34:16 -- target/host_management.sh@104 -- # nvmftestinit 00:15:12.949 07:34:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.949 07:34:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.949 07:34:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.949 07:34:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.949 07:34:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.949 07:34:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.949 07:34:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.949 07:34:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
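nvmftestinit registers `trap nvmftestfini SIGINT SIGTERM EXIT` (nvmf/common.sh@434 above) so the network namespace and kernel modules are torn down on every exit path, clean or not. The pattern in miniature — the scratch directory and cleanup function here are illustrative, not part of the suite:

```shell
# Cleanup-on-exit pattern used by nvmftestinit/nvmftestfini: the
# trapped handler runs on normal exit and on SIGINT/SIGTERM alike.
scratch=$(mktemp -d)
cleanup() { rm -rf "$scratch"; }
trap cleanup INT TERM EXIT

touch "$scratch/state"
# ... test body runs here; cleanup fires no matter how we leave.
```

Because the handler is bound to EXIT as well as the signals, a failing command under `set -e` still triggers the teardown, which is why the suite can leave the host in a usable state after an aborted run.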
00:15:12.949 07:34:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:12.949 07:34:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:12.949 07:34:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:12.949 07:34:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.223 07:34:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:18.223 07:34:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:18.223 07:34:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:18.223 07:34:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:18.223 07:34:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:18.223 07:34:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:18.223 07:34:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:18.223 07:34:21 -- nvmf/common.sh@294 -- # net_devs=() 00:15:18.223 07:34:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:18.223 07:34:21 -- nvmf/common.sh@295 -- # e810=() 00:15:18.223 07:34:21 -- nvmf/common.sh@295 -- # local -ga e810 00:15:18.223 07:34:21 -- nvmf/common.sh@296 -- # x722=() 00:15:18.223 07:34:21 -- nvmf/common.sh@296 -- # local -ga x722 00:15:18.223 07:34:21 -- nvmf/common.sh@297 -- # mlx=() 00:15:18.223 07:34:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:18.223 07:34:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:15:18.223 07:34:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.223 07:34:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:18.223 07:34:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:18.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:18.223 07:34:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:18.223 07:34:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:18.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:18.223 07:34:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:18.223 
07:34:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:18.223 07:34:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.223 07:34:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.223 07:34:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:18.223 Found net devices under 0000:af:00.0: cvl_0_0 00:15:18.223 07:34:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:18.223 07:34:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.223 07:34:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.223 07:34:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:18.223 Found net devices under 0000:af:00.1: cvl_0_1 00:15:18.223 07:34:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:18.223 07:34:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:18.223 07:34:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.223 07:34:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.223 07:34:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:18.223 07:34:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.223 07:34:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.223 07:34:21 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:18.223 07:34:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.223 07:34:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.223 07:34:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:18.223 07:34:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:18.223 07:34:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.223 07:34:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.223 07:34:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.223 07:34:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.223 07:34:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:18.223 07:34:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.223 07:34:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.223 07:34:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.223 07:34:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:18.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:15:18.223 00:15:18.223 --- 10.0.0.2 ping statistics --- 00:15:18.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.223 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:15:18.223 07:34:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:18.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:15:18.223 00:15:18.223 --- 10.0.0.1 ping statistics --- 00:15:18.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.223 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:18.223 07:34:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.223 07:34:21 -- nvmf/common.sh@410 -- # return 0 00:15:18.223 07:34:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:18.223 07:34:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.223 07:34:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:18.223 07:34:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.223 07:34:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:18.223 07:34:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:18.223 07:34:21 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:18.223 07:34:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:18.223 07:34:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.223 07:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.223 ************************************ 00:15:18.223 START TEST nvmf_host_management 00:15:18.223 ************************************ 00:15:18.223 07:34:21 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:15:18.223 07:34:21 -- target/host_management.sh@69 -- # starttarget 00:15:18.223 07:34:21 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:18.223 07:34:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:18.223 07:34:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:18.223 07:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.223 07:34:21 -- nvmf/common.sh@469 -- # nvmfpid=4084238 00:15:18.223 07:34:21 -- nvmf/common.sh@470 -- # waitforlisten 4084238 
00:15:18.223 07:34:21 -- common/autotest_common.sh@819 -- # '[' -z 4084238 ']' 00:15:18.223 07:34:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.223 07:34:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.223 07:34:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.223 07:34:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.224 07:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.224 07:34:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:18.224 [2024-10-07 07:34:21.828757] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:18.224 [2024-10-07 07:34:21.828799] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.224 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.224 [2024-10-07 07:34:21.887493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.224 [2024-10-07 07:34:21.963484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.224 [2024-10-07 07:34:21.963589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.224 [2024-10-07 07:34:21.963597] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.224 [2024-10-07 07:34:21.963604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:18.224 [2024-10-07 07:34:21.963640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.224 [2024-10-07 07:34:21.963726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.224 [2024-10-07 07:34:21.963833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.224 [2024-10-07 07:34:21.963834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:18.792 07:34:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.792 07:34:22 -- common/autotest_common.sh@852 -- # return 0 00:15:18.792 07:34:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.792 07:34:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.792 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 07:34:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.792 07:34:22 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.792 07:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.792 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 [2024-10-07 07:34:22.678371] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.792 07:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.792 07:34:22 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:18.792 07:34:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:18.792 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 07:34:22 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:18.792 07:34:22 -- target/host_management.sh@23 -- # cat 00:15:18.792 07:34:22 -- target/host_management.sh@30 -- # rpc_cmd 00:15:18.792 07:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.792 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:18.792 
Malloc0 00:15:18.792 [2024-10-07 07:34:22.738021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.792 07:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.792 07:34:22 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:18.792 07:34:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.792 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:19.052 07:34:22 -- target/host_management.sh@73 -- # perfpid=4084505 00:15:19.052 07:34:22 -- target/host_management.sh@74 -- # waitforlisten 4084505 /var/tmp/bdevperf.sock 00:15:19.052 07:34:22 -- common/autotest_common.sh@819 -- # '[' -z 4084505 ']' 00:15:19.052 07:34:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.052 07:34:22 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:19.052 07:34:22 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:19.052 07:34:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:19.052 07:34:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:19.052 07:34:22 -- nvmf/common.sh@520 -- # config=() 00:15:19.052 07:34:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:19.052 07:34:22 -- nvmf/common.sh@520 -- # local subsystem config 00:15:19.052 07:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:19.052 07:34:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:19.052 07:34:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:19.052 { 00:15:19.052 "params": { 00:15:19.052 "name": "Nvme$subsystem", 00:15:19.052 "trtype": "$TEST_TRANSPORT", 00:15:19.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.052 "adrfam": "ipv4", 00:15:19.052 "trsvcid": "$NVMF_PORT", 00:15:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.052 "hdgst": ${hdgst:-false}, 00:15:19.052 "ddgst": ${ddgst:-false} 00:15:19.052 }, 00:15:19.052 "method": "bdev_nvme_attach_controller" 00:15:19.052 } 00:15:19.052 EOF 00:15:19.052 )") 00:15:19.052 07:34:22 -- nvmf/common.sh@542 -- # cat 00:15:19.052 07:34:22 -- nvmf/common.sh@544 -- # jq . 00:15:19.052 07:34:22 -- nvmf/common.sh@545 -- # IFS=, 00:15:19.052 07:34:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:19.052 "params": { 00:15:19.052 "name": "Nvme0", 00:15:19.052 "trtype": "tcp", 00:15:19.052 "traddr": "10.0.0.2", 00:15:19.052 "adrfam": "ipv4", 00:15:19.052 "trsvcid": "4420", 00:15:19.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:19.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:19.052 "hdgst": false, 00:15:19.052 "ddgst": false 00:15:19.052 }, 00:15:19.052 "method": "bdev_nvme_attach_controller" 00:15:19.052 }' 00:15:19.052 [2024-10-07 07:34:22.826254] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:19.052 [2024-10-07 07:34:22.826298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084505 ] 00:15:19.052 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.052 [2024-10-07 07:34:22.881371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.052 [2024-10-07 07:34:22.950001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.311 Running I/O for 10 seconds... 00:15:19.881 07:34:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.881 07:34:23 -- common/autotest_common.sh@852 -- # return 0 00:15:19.881 07:34:23 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:19.881 07:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.881 07:34:23 -- common/autotest_common.sh@10 -- # set +x 00:15:19.881 07:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.881 07:34:23 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.881 07:34:23 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:19.881 07:34:23 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:19.881 07:34:23 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:19.881 07:34:23 -- target/host_management.sh@52 -- # local ret=1 00:15:19.881 07:34:23 -- target/host_management.sh@53 -- # local i 00:15:19.881 07:34:23 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:19.881 07:34:23 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:19.881 07:34:23 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:19.881 07:34:23 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 
00:15:19.881 07:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.881 07:34:23 -- common/autotest_common.sh@10 -- # set +x 00:15:19.881 07:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.881 07:34:23 -- target/host_management.sh@55 -- # read_io_count=1446 00:15:19.881 07:34:23 -- target/host_management.sh@58 -- # '[' 1446 -ge 100 ']' 00:15:19.881 07:34:23 -- target/host_management.sh@59 -- # ret=0 00:15:19.881 07:34:23 -- target/host_management.sh@60 -- # break 00:15:19.881 07:34:23 -- target/host_management.sh@64 -- # return 0 00:15:19.881 07:34:23 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:19.881 07:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.881 07:34:23 -- common/autotest_common.sh@10 -- # set +x 00:15:19.881 [2024-10-07 07:34:23.713455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1291a30 is same with the state(5) to be set 00:15:19.881 [2024-10-07 07:34:23.714087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.881 [2024-10-07 07:34:23.714208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.881 [2024-10-07 07:34:23.714216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 
07:34:23.714627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.882 [2024-10-07 07:34:23.714664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.882 [2024-10-07 07:34:23.714681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714719] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 
[2024-10-07 07:34:23.714887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.714990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.714997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.883 [2024-10-07 07:34:23.715085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.715159] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe9fe10 was disconnected and freed. reset controller. 00:15:19.883 [2024-10-07 07:34:23.716051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:19.883 task offset: 63616 on job bdev=Nvme0n1 fails 00:15:19.883 00:15:19.883 Latency(us) 00:15:19.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:19.883 Job: Nvme0n1 ended in about 0.49 seconds with error 00:15:19.883 Verification LBA range: start 0x0 length 0x400 00:15:19.883 Nvme0n1 : 0.49 3158.00 197.37 130.48 0.00 19202.82 1341.93 25465.42 00:15:19.883 =================================================================================================================== 00:15:19.883 Total : 3158.00 197.37 130.48 0.00 19202.82 1341.93 25465.42 00:15:19.883 [2024-10-07 07:34:23.717608] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:19.883 [2024-10-07 07:34:23.717625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea22a0 (9): Bad 
file descriptor 00:15:19.883 07:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.883 07:34:23 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:19.883 07:34:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.883 07:34:23 -- common/autotest_common.sh@10 -- # set +x 00:15:19.883 [2024-10-07 07:34:23.720965] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:19.883 [2024-10-07 07:34:23.721134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:19.883 [2024-10-07 07:34:23.721158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.883 [2024-10-07 07:34:23.721171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:19.883 [2024-10-07 07:34:23.721178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:19.883 [2024-10-07 07:34:23.721186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:15:19.883 [2024-10-07 07:34:23.721193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xea22a0 00:15:19.883 [2024-10-07 07:34:23.721212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea22a0 (9): Bad file descriptor 00:15:19.883 [2024-10-07 07:34:23.721223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:19.883 [2024-10-07 07:34:23.721230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:15:19.883 [2024-10-07 07:34:23.721238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:19.884 [2024-10-07 07:34:23.721255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:19.884 07:34:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.884 07:34:23 -- target/host_management.sh@87 -- # sleep 1 00:15:20.820 07:34:24 -- target/host_management.sh@91 -- # kill -9 4084505 00:15:20.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4084505) - No such process 00:15:20.820 07:34:24 -- target/host_management.sh@91 -- # true 00:15:20.820 07:34:24 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:20.820 07:34:24 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:20.820 07:34:24 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:20.820 07:34:24 -- nvmf/common.sh@520 -- # config=() 00:15:20.820 07:34:24 -- nvmf/common.sh@520 -- # local subsystem config 00:15:20.820 07:34:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:20.820 07:34:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:20.820 { 00:15:20.820 "params": { 00:15:20.820 "name": "Nvme$subsystem", 00:15:20.820 "trtype": "$TEST_TRANSPORT", 00:15:20.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.820 "adrfam": "ipv4", 00:15:20.820 "trsvcid": "$NVMF_PORT", 00:15:20.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.821 "hdgst": ${hdgst:-false}, 00:15:20.821 "ddgst": ${ddgst:-false} 00:15:20.821 }, 00:15:20.821 "method": "bdev_nvme_attach_controller" 00:15:20.821 } 00:15:20.821 EOF 
00:15:20.821 )") 00:15:20.821 07:34:24 -- nvmf/common.sh@542 -- # cat 00:15:20.821 07:34:24 -- nvmf/common.sh@544 -- # jq . 00:15:20.821 07:34:24 -- nvmf/common.sh@545 -- # IFS=, 00:15:20.821 07:34:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:20.821 "params": { 00:15:20.821 "name": "Nvme0", 00:15:20.821 "trtype": "tcp", 00:15:20.821 "traddr": "10.0.0.2", 00:15:20.821 "adrfam": "ipv4", 00:15:20.821 "trsvcid": "4420", 00:15:20.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:20.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:20.821 "hdgst": false, 00:15:20.821 "ddgst": false 00:15:20.821 }, 00:15:20.821 "method": "bdev_nvme_attach_controller" 00:15:20.821 }' 00:15:20.821 [2024-10-07 07:34:24.777825] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:20.821 [2024-10-07 07:34:24.777873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084752 ] 00:15:21.080 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.080 [2024-10-07 07:34:24.833140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.080 [2024-10-07 07:34:24.899051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.338 Running I/O for 1 seconds... 
00:15:22.274 00:15:22.274 Latency(us) 00:15:22.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.274 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:22.274 Verification LBA range: start 0x0 length 0x400 00:15:22.274 Nvme0n1 : 1.01 3959.25 247.45 0.00 0.00 15947.42 1669.61 30458.64 00:15:22.274 =================================================================================================================== 00:15:22.274 Total : 3959.25 247.45 0.00 0.00 15947.42 1669.61 30458.64 00:15:22.533 07:34:26 -- target/host_management.sh@101 -- # stoptarget 00:15:22.534 07:34:26 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:22.534 07:34:26 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:22.534 07:34:26 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:22.534 07:34:26 -- target/host_management.sh@40 -- # nvmftestfini 00:15:22.534 07:34:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:22.534 07:34:26 -- nvmf/common.sh@116 -- # sync 00:15:22.534 07:34:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:22.534 07:34:26 -- nvmf/common.sh@119 -- # set +e 00:15:22.534 07:34:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:22.534 07:34:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:22.534 rmmod nvme_tcp 00:15:22.534 rmmod nvme_fabrics 00:15:22.534 rmmod nvme_keyring 00:15:22.534 07:34:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:22.534 07:34:26 -- nvmf/common.sh@123 -- # set -e 00:15:22.534 07:34:26 -- nvmf/common.sh@124 -- # return 0 00:15:22.534 07:34:26 -- nvmf/common.sh@477 -- # '[' -n 4084238 ']' 00:15:22.534 07:34:26 -- nvmf/common.sh@478 -- # killprocess 4084238 00:15:22.534 07:34:26 -- common/autotest_common.sh@926 -- # '[' -z 4084238 ']' 00:15:22.534 07:34:26 -- 
common/autotest_common.sh@930 -- # kill -0 4084238 00:15:22.534 07:34:26 -- common/autotest_common.sh@931 -- # uname 00:15:22.534 07:34:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:22.534 07:34:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4084238 00:15:22.534 07:34:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:22.534 07:34:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:22.534 07:34:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4084238' 00:15:22.534 killing process with pid 4084238 00:15:22.534 07:34:26 -- common/autotest_common.sh@945 -- # kill 4084238 00:15:22.534 07:34:26 -- common/autotest_common.sh@950 -- # wait 4084238 00:15:22.793 [2024-10-07 07:34:26.641950] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:22.793 07:34:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:22.793 07:34:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:22.793 07:34:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:22.793 07:34:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.793 07:34:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:22.793 07:34:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.793 07:34:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.793 07:34:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.327 07:34:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:25.327 00:15:25.327 real 0m6.949s 00:15:25.327 user 0m21.142s 00:15:25.327 sys 0m1.188s 00:15:25.327 07:34:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.327 07:34:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.327 ************************************ 00:15:25.327 END TEST nvmf_host_management 00:15:25.327 ************************************ 00:15:25.327 07:34:28 -- 
target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:25.327 00:15:25.327 real 0m12.098s 00:15:25.327 user 0m22.491s 00:15:25.327 sys 0m4.974s 00:15:25.327 07:34:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.327 07:34:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.327 ************************************ 00:15:25.327 END TEST nvmf_host_management 00:15:25.327 ************************************ 00:15:25.327 07:34:28 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:25.327 07:34:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.327 07:34:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.327 07:34:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.327 ************************************ 00:15:25.327 START TEST nvmf_lvol 00:15:25.327 ************************************ 00:15:25.327 07:34:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:25.327 * Looking for test storage... 
00:15:25.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.327 07:34:28 -- nvmf/common.sh@7 -- # uname -s 00:15:25.327 07:34:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.327 07:34:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.327 07:34:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.327 07:34:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.327 07:34:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.327 07:34:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.327 07:34:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.327 07:34:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.327 07:34:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.327 07:34:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.327 07:34:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:25.327 07:34:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:15:25.327 07:34:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.327 07:34:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.327 07:34:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.327 07:34:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.327 07:34:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.327 07:34:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.327 07:34:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.327 07:34:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.327 07:34:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.327 07:34:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.327 07:34:28 -- paths/export.sh@5 -- # export PATH 00:15:25.327 07:34:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.327 07:34:28 -- nvmf/common.sh@46 -- # : 0 00:15:25.327 07:34:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.327 07:34:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.327 07:34:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.327 07:34:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.327 07:34:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.327 07:34:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:25.327 07:34:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.327 07:34:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:25.327 07:34:28 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:25.328 07:34:28 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:25.328 07:34:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.328 07:34:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.328 07:34:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.328 07:34:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.328 07:34:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:15:25.328 07:34:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.328 07:34:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.328 07:34:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.328 07:34:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:25.328 07:34:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:25.328 07:34:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:25.328 07:34:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.601 07:34:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:30.601 07:34:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:30.601 07:34:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:30.601 07:34:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:30.601 07:34:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:30.601 07:34:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:30.601 07:34:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:30.601 07:34:34 -- nvmf/common.sh@294 -- # net_devs=() 00:15:30.601 07:34:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:30.601 07:34:34 -- nvmf/common.sh@295 -- # e810=() 00:15:30.601 07:34:34 -- nvmf/common.sh@295 -- # local -ga e810 00:15:30.601 07:34:34 -- nvmf/common.sh@296 -- # x722=() 00:15:30.601 07:34:34 -- nvmf/common.sh@296 -- # local -ga x722 00:15:30.601 07:34:34 -- nvmf/common.sh@297 -- # mlx=() 00:15:30.601 07:34:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:30.601 07:34:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.601 07:34:34 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.601 07:34:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:30.601 07:34:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:30.601 07:34:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:30.601 07:34:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:30.601 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:30.601 07:34:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:30.601 07:34:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:30.601 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:30.601 07:34:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.601 07:34:34 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:30.601 07:34:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.601 07:34:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.601 07:34:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:30.601 Found net devices under 0000:af:00.0: cvl_0_0 00:15:30.601 07:34:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.601 07:34:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:30.601 07:34:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.601 07:34:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.601 07:34:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:30.601 Found net devices under 0000:af:00.1: cvl_0_1 00:15:30.601 07:34:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.601 07:34:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:30.601 07:34:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:30.601 07:34:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:30.601 07:34:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.601 07:34:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.601 07:34:34 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.601 07:34:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:30.601 07:34:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.601 07:34:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.601 07:34:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:30.601 07:34:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.601 07:34:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.601 07:34:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:30.601 07:34:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:30.601 07:34:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.601 07:34:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.601 07:34:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.601 07:34:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.601 07:34:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:30.601 07:34:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.601 07:34:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.601 07:34:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.601 07:34:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:30.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:15:30.601 00:15:30.601 --- 10.0.0.2 ping statistics --- 00:15:30.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.601 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:15:30.601 07:34:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:15:30.601 00:15:30.601 --- 10.0.0.1 ping statistics --- 00:15:30.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.602 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:15:30.602 07:34:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.602 07:34:34 -- nvmf/common.sh@410 -- # return 0 00:15:30.602 07:34:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:30.602 07:34:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.602 07:34:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:30.602 07:34:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:30.602 07:34:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.602 07:34:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:30.602 07:34:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:30.602 07:34:34 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:30.602 07:34:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:30.602 07:34:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:30.602 07:34:34 -- common/autotest_common.sh@10 -- # set +x 00:15:30.602 07:34:34 -- nvmf/common.sh@469 -- # nvmfpid=4088472 00:15:30.602 07:34:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:30.602 07:34:34 -- nvmf/common.sh@470 -- # waitforlisten 4088472 00:15:30.602 07:34:34 -- common/autotest_common.sh@819 -- # '[' -z 4088472 ']' 00:15:30.602 07:34:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.602 07:34:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:30.602 07:34:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:30.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.602 07:34:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:30.602 07:34:34 -- common/autotest_common.sh@10 -- # set +x 00:15:30.602 [2024-10-07 07:34:34.377195] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:30.602 [2024-10-07 07:34:34.377238] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.602 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.602 [2024-10-07 07:34:34.434439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.602 [2024-10-07 07:34:34.509777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.602 [2024-10-07 07:34:34.509891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.602 [2024-10-07 07:34:34.509900] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.602 [2024-10-07 07:34:34.509907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.602 [2024-10-07 07:34:34.509946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.602 [2024-10-07 07:34:34.510042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.602 [2024-10-07 07:34:34.510044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.537 07:34:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:31.537 07:34:35 -- common/autotest_common.sh@852 -- # return 0 00:15:31.537 07:34:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:31.537 07:34:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:31.537 07:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:31.537 07:34:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.537 07:34:35 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.537 [2024-10-07 07:34:35.399365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.537 07:34:35 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.796 07:34:35 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:31.796 07:34:35 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.055 07:34:35 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:32.055 07:34:35 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:32.055 07:34:36 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:32.313 07:34:36 -- target/nvmf_lvol.sh@29 -- # lvs=b5aad16b-6c6c-4e7e-8096-7f81614e1a9e 00:15:32.313 07:34:36 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5aad16b-6c6c-4e7e-8096-7f81614e1a9e lvol 20 00:15:32.571 07:34:36 -- target/nvmf_lvol.sh@32 -- # lvol=c5578b74-f188-46d7-bd7d-f457255c043b 00:15:32.571 07:34:36 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:32.830 07:34:36 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c5578b74-f188-46d7-bd7d-f457255c043b 00:15:32.830 07:34:36 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.090 [2024-10-07 07:34:36.935169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.090 07:34:36 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.348 07:34:37 -- target/nvmf_lvol.sh@42 -- # perf_pid=4088966 00:15:33.348 07:34:37 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:33.348 07:34:37 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:33.348 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.284 07:34:38 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c5578b74-f188-46d7-bd7d-f457255c043b MY_SNAPSHOT 00:15:34.544 07:34:38 -- target/nvmf_lvol.sh@47 -- # snapshot=a711473b-c2bb-4162-86fc-92f9f25f3c37 00:15:34.544 07:34:38 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
c5578b74-f188-46d7-bd7d-f457255c043b 30 00:15:34.803 07:34:38 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a711473b-c2bb-4162-86fc-92f9f25f3c37 MY_CLONE 00:15:35.062 07:34:38 -- target/nvmf_lvol.sh@49 -- # clone=265545a7-e2c0-40d6-ac6c-4e6ebb6471cb 00:15:35.062 07:34:38 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 265545a7-e2c0-40d6-ac6c-4e6ebb6471cb 00:15:35.321 07:34:39 -- target/nvmf_lvol.sh@53 -- # wait 4088966 00:15:45.304 Initializing NVMe Controllers 00:15:45.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:45.304 Controller IO queue size 128, less than required. 00:15:45.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:45.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:45.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:45.304 Initialization complete. Launching workers. 
00:15:45.304 ======================================================== 00:15:45.304 Latency(us) 00:15:45.304 Device Information : IOPS MiB/s Average min max 00:15:45.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12418.60 48.51 10309.94 1822.44 56861.88 00:15:45.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12618.80 49.29 10146.68 2379.84 61305.77 00:15:45.304 ======================================================== 00:15:45.304 Total : 25037.39 97.80 10227.66 1822.44 61305.77 00:15:45.304 00:15:45.304 07:34:47 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:45.304 07:34:47 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c5578b74-f188-46d7-bd7d-f457255c043b 00:15:45.304 07:34:47 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5aad16b-6c6c-4e7e-8096-7f81614e1a9e 00:15:45.304 07:34:48 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:45.304 07:34:48 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:45.304 07:34:48 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:45.304 07:34:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.304 07:34:48 -- nvmf/common.sh@116 -- # sync 00:15:45.304 07:34:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.304 07:34:48 -- nvmf/common.sh@119 -- # set +e 00:15:45.304 07:34:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.304 07:34:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.304 rmmod nvme_tcp 00:15:45.304 rmmod nvme_fabrics 00:15:45.304 rmmod nvme_keyring 00:15:45.304 07:34:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:45.304 07:34:48 -- nvmf/common.sh@123 -- # set -e 00:15:45.304 07:34:48 -- nvmf/common.sh@124 -- # return 0 00:15:45.304 07:34:48 -- nvmf/common.sh@477 -- # '[' 
-n 4088472 ']' 00:15:45.304 07:34:48 -- nvmf/common.sh@478 -- # killprocess 4088472 00:15:45.304 07:34:48 -- common/autotest_common.sh@926 -- # '[' -z 4088472 ']' 00:15:45.304 07:34:48 -- common/autotest_common.sh@930 -- # kill -0 4088472 00:15:45.304 07:34:48 -- common/autotest_common.sh@931 -- # uname 00:15:45.304 07:34:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:45.304 07:34:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4088472 00:15:45.304 07:34:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:45.304 07:34:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:45.304 07:34:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4088472' 00:15:45.304 killing process with pid 4088472 00:15:45.304 07:34:48 -- common/autotest_common.sh@945 -- # kill 4088472 00:15:45.304 07:34:48 -- common/autotest_common.sh@950 -- # wait 4088472 00:15:45.304 07:34:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.304 07:34:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.304 07:34:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.304 07:34:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.304 07:34:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.304 07:34:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.304 07:34:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.304 07:34:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.681 07:34:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:46.681 00:15:46.681 real 0m21.679s 00:15:46.681 user 1m4.088s 00:15:46.681 sys 0m7.048s 00:15:46.681 07:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.681 07:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 ************************************ 00:15:46.681 END TEST nvmf_lvol 00:15:46.681 
************************************ 00:15:46.681 07:34:50 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:46.681 07:34:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:46.681 07:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.681 07:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 ************************************ 00:15:46.681 START TEST nvmf_lvs_grow 00:15:46.681 ************************************ 00:15:46.681 07:34:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:46.681 * Looking for test storage... 00:15:46.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.681 07:34:50 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.681 07:34:50 -- nvmf/common.sh@7 -- # uname -s 00:15:46.681 07:34:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.681 07:34:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.681 07:34:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.681 07:34:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.681 07:34:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.681 07:34:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.681 07:34:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.681 07:34:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.681 07:34:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.681 07:34:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.681 07:34:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:15:46.681 07:34:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 
00:15:46.681 07:34:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.681 07:34:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.681 07:34:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.681 07:34:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.681 07:34:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.681 07:34:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.681 07:34:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.681 07:34:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.681 07:34:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.681 07:34:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.681 07:34:50 -- paths/export.sh@5 -- # export PATH 00:15:46.681 07:34:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.681 07:34:50 -- nvmf/common.sh@46 -- # : 0 00:15:46.681 07:34:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:46.681 07:34:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:46.681 07:34:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:46.681 07:34:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.681 07:34:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.681 07:34:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:46.681 07:34:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:46.681 07:34:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:46.681 07:34:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.681 07:34:50 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.681 07:34:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:46.681 07:34:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:46.681 07:34:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.681 07:34:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:46.681 07:34:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:46.681 07:34:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:46.681 07:34:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.681 07:34:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.681 07:34:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.681 07:34:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:46.681 07:34:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:46.681 07:34:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:46.681 07:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.944 07:34:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:51.944 07:34:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:51.944 07:34:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:51.944 07:34:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:51.944 07:34:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:51.944 07:34:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:51.944 07:34:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:51.944 07:34:55 -- nvmf/common.sh@294 -- # net_devs=() 00:15:51.944 07:34:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:51.944 07:34:55 -- nvmf/common.sh@295 -- # e810=() 00:15:51.944 07:34:55 -- nvmf/common.sh@295 -- # local -ga e810 00:15:51.944 07:34:55 -- nvmf/common.sh@296 -- # x722=() 00:15:51.944 07:34:55 -- nvmf/common.sh@296 -- # local -ga x722 00:15:51.944 07:34:55 -- nvmf/common.sh@297 -- # mlx=() 00:15:51.944 07:34:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:51.944 07:34:55 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.944 07:34:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:51.944 07:34:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:51.944 07:34:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:51.944 07:34:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:51.944 07:34:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:51.944 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:51.944 07:34:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:51.944 
07:34:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:51.944 07:34:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:51.944 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:51.944 07:34:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:51.944 07:34:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:51.944 07:34:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.944 07:34:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:51.944 07:34:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.944 07:34:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:51.944 Found net devices under 0000:af:00.0: cvl_0_0 00:15:51.944 07:34:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.944 07:34:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:51.944 07:34:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.944 07:34:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:51.944 07:34:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.944 07:34:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:51.944 Found net devices under 0000:af:00.1: cvl_0_1 00:15:51.944 07:34:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.944 07:34:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:51.944 07:34:55 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:15:51.944 07:34:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:51.944 07:34:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:51.944 07:34:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.944 07:34:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.944 07:34:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.944 07:34:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:51.944 07:34:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.944 07:34:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.944 07:34:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:51.944 07:34:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.944 07:34:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.944 07:34:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:51.944 07:34:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:51.944 07:34:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.944 07:34:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.203 07:34:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.203 07:34:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.203 07:34:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:52.203 07:34:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.203 07:34:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.203 07:34:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.203 07:34:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:52.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:15:52.203 00:15:52.203 --- 10.0.0.2 ping statistics --- 00:15:52.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.203 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:15:52.204 07:34:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:15:52.204 00:15:52.204 --- 10.0.0.1 ping statistics --- 00:15:52.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.204 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:15:52.204 07:34:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.204 07:34:56 -- nvmf/common.sh@410 -- # return 0 00:15:52.204 07:34:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.204 07:34:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.204 07:34:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.204 07:34:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.204 07:34:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.204 07:34:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.204 07:34:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.204 07:34:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:52.204 07:34:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.204 07:34:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.204 07:34:56 -- common/autotest_common.sh@10 -- # set +x 00:15:52.204 07:34:56 -- nvmf/common.sh@469 -- # nvmfpid=4094250 00:15:52.204 07:34:56 -- nvmf/common.sh@470 -- # waitforlisten 4094250 00:15:52.204 07:34:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:52.204 07:34:56 -- 
common/autotest_common.sh@819 -- # '[' -z 4094250 ']' 00:15:52.204 07:34:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.204 07:34:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.204 07:34:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.204 07:34:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.204 07:34:56 -- common/autotest_common.sh@10 -- # set +x 00:15:52.204 [2024-10-07 07:34:56.134799] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:52.204 [2024-10-07 07:34:56.134844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.204 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.463 [2024-10-07 07:34:56.192468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.463 [2024-10-07 07:34:56.268100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.463 [2024-10-07 07:34:56.268205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.463 [2024-10-07 07:34:56.268214] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.463 [2024-10-07 07:34:56.268220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.463 [2024-10-07 07:34:56.268242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.030 07:34:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.030 07:34:56 -- common/autotest_common.sh@852 -- # return 0 00:15:53.030 07:34:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.030 07:34:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:53.030 07:34:56 -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 07:34:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.030 07:34:56 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.288 [2024-10-07 07:34:57.134470] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:53.288 07:34:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:53.288 07:34:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:53.288 07:34:57 -- common/autotest_common.sh@10 -- # set +x 00:15:53.288 ************************************ 00:15:53.288 START TEST lvs_grow_clean 00:15:53.288 ************************************ 00:15:53.288 07:34:57 -- common/autotest_common.sh@1104 -- # lvs_grow 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:53.288 07:34:57 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:53.546 07:34:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:53.546 07:34:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:53.806 07:34:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac1986d3-ae3d-4328-841f-7fa04cb4627e lvol 150 00:15:54.065 07:34:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=32bb4451-3d96-481a-802c-4488a9367b87 00:15:54.065 07:34:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:54.065 07:34:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:54.324 [2024-10-07 07:34:58.070841] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:54.324 [2024-10-07 07:34:58.070887] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:54.324 true 00:15:54.324 07:34:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:15:54.324 07:34:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:54.324 07:34:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:54.324 07:34:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:54.582 07:34:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32bb4451-3d96-481a-802c-4488a9367b87 00:15:54.841 07:34:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:54.841 [2024-10-07 07:34:58.764956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.841 07:34:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.102 07:34:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4094750 00:15:55.102 07:34:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.102 07:34:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:55.102 07:34:58 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4094750 /var/tmp/bdevperf.sock 00:15:55.102 07:34:58 -- common/autotest_common.sh@819 -- # '[' -z 4094750 ']' 00:15:55.102 07:34:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.102 07:34:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:55.102 07:34:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.102 07:34:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:55.102 07:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:55.102 [2024-10-07 07:34:59.000145] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:55.102 [2024-10-07 07:34:59.000190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094750 ] 00:15:55.102 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.102 [2024-10-07 07:34:59.053454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.360 [2024-10-07 07:34:59.128921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.928 07:34:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.928 07:34:59 -- common/autotest_common.sh@852 -- # return 0 00:15:55.928 07:34:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:56.495 Nvme0n1 00:15:56.495 07:35:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:15:56.495 [ 00:15:56.495 { 00:15:56.495 "name": "Nvme0n1", 00:15:56.495 "aliases": [ 00:15:56.495 "32bb4451-3d96-481a-802c-4488a9367b87" 00:15:56.495 ], 00:15:56.495 "product_name": "NVMe disk", 00:15:56.495 "block_size": 4096, 00:15:56.495 "num_blocks": 38912, 00:15:56.495 "uuid": "32bb4451-3d96-481a-802c-4488a9367b87", 00:15:56.496 "assigned_rate_limits": { 00:15:56.496 "rw_ios_per_sec": 0, 00:15:56.496 "rw_mbytes_per_sec": 0, 00:15:56.496 "r_mbytes_per_sec": 0, 00:15:56.496 "w_mbytes_per_sec": 0 00:15:56.496 }, 00:15:56.496 "claimed": false, 00:15:56.496 "zoned": false, 00:15:56.496 "supported_io_types": { 00:15:56.496 "read": true, 00:15:56.496 "write": true, 00:15:56.496 "unmap": true, 00:15:56.496 "write_zeroes": true, 00:15:56.496 "flush": true, 00:15:56.496 "reset": true, 00:15:56.496 "compare": true, 00:15:56.496 "compare_and_write": true, 00:15:56.496 "abort": true, 00:15:56.496 "nvme_admin": true, 00:15:56.496 "nvme_io": true 00:15:56.496 }, 00:15:56.496 "driver_specific": { 00:15:56.496 "nvme": [ 00:15:56.496 { 00:15:56.496 "trid": { 00:15:56.496 "trtype": "TCP", 00:15:56.496 "adrfam": "IPv4", 00:15:56.496 "traddr": "10.0.0.2", 00:15:56.496 "trsvcid": "4420", 00:15:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:56.496 }, 00:15:56.496 "ctrlr_data": { 00:15:56.496 "cntlid": 1, 00:15:56.496 "vendor_id": "0x8086", 00:15:56.496 "model_number": "SPDK bdev Controller", 00:15:56.496 "serial_number": "SPDK0", 00:15:56.496 "firmware_revision": "24.01.1", 00:15:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:56.496 "oacs": { 00:15:56.496 "security": 0, 00:15:56.496 "format": 0, 00:15:56.496 "firmware": 0, 00:15:56.496 "ns_manage": 0 00:15:56.496 }, 00:15:56.496 "multi_ctrlr": true, 00:15:56.496 "ana_reporting": false 00:15:56.496 }, 00:15:56.496 "vs": { 00:15:56.496 "nvme_version": "1.3" 00:15:56.496 }, 00:15:56.496 "ns_data": { 00:15:56.496 "id": 1, 00:15:56.496 "can_share": true 00:15:56.496 } 00:15:56.496 } 00:15:56.496 ], 00:15:56.496 
"mp_policy": "active_passive" 00:15:56.496 } 00:15:56.496 } 00:15:56.496 ] 00:15:56.496 07:35:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4094980 00:15:56.496 07:35:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:56.496 07:35:00 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.496 Running I/O for 10 seconds... 00:15:57.874 Latency(us) 00:15:57.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.874 Nvme0n1 : 1.00 23030.00 89.96 0.00 0.00 0.00 0.00 0.00 00:15:57.874 =================================================================================================================== 00:15:57.874 Total : 23030.00 89.96 0.00 0.00 0.00 0.00 0.00 00:15:57.874 00:15:58.442 07:35:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:15:58.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.700 Nvme0n1 : 2.00 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:15:58.700 =================================================================================================================== 00:15:58.700 Total : 23291.00 90.98 0.00 0.00 0.00 0.00 0.00 00:15:58.700 00:15:58.700 true 00:15:58.700 07:35:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:15:58.700 07:35:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:58.960 07:35:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:58.960 07:35:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:58.960 07:35:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 4094980 00:15:59.527 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.527 Nvme0n1 : 3.00 23380.67 91.33 0.00 0.00 0.00 0.00 0.00 00:15:59.527 =================================================================================================================== 00:15:59.527 Total : 23380.67 91.33 0.00 0.00 0.00 0.00 0.00 00:15:59.527 00:16:00.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.903 Nvme0n1 : 4.00 23355.50 91.23 0.00 0.00 0.00 0.00 0.00 00:16:00.903 =================================================================================================================== 00:16:00.903 Total : 23355.50 91.23 0.00 0.00 0.00 0.00 0.00 00:16:00.903 00:16:01.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.839 Nvme0n1 : 5.00 23433.20 91.54 0.00 0.00 0.00 0.00 0.00 00:16:01.839 =================================================================================================================== 00:16:01.839 Total : 23433.20 91.54 0.00 0.00 0.00 0.00 0.00 00:16:01.839 00:16:02.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.775 Nvme0n1 : 6.00 23501.00 91.80 0.00 0.00 0.00 0.00 0.00 00:16:02.775 =================================================================================================================== 00:16:02.775 Total : 23501.00 91.80 0.00 0.00 0.00 0.00 0.00 00:16:02.775 00:16:03.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.710 Nvme0n1 : 7.00 23543.71 91.97 0.00 0.00 0.00 0.00 0.00 00:16:03.710 =================================================================================================================== 00:16:03.710 Total : 23543.71 91.97 0.00 0.00 0.00 0.00 0.00 00:16:03.710 00:16:04.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.645 Nvme0n1 : 8.00 23583.75 92.12 0.00 0.00 0.00 0.00 0.00 00:16:04.645 
=================================================================================================================== 00:16:04.645 Total : 23583.75 92.12 0.00 0.00 0.00 0.00 0.00 00:16:04.645 00:16:05.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.578 Nvme0n1 : 9.00 23614.89 92.25 0.00 0.00 0.00 0.00 0.00 00:16:05.578 =================================================================================================================== 00:16:05.578 Total : 23614.89 92.25 0.00 0.00 0.00 0.00 0.00 00:16:05.578 00:16:06.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.585 Nvme0n1 : 10.00 23595.80 92.17 0.00 0.00 0.00 0.00 0.00 00:16:06.585 =================================================================================================================== 00:16:06.585 Total : 23595.80 92.17 0.00 0.00 0.00 0.00 0.00 00:16:06.585 00:16:06.585 00:16:06.585 Latency(us) 00:16:06.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.585 Nvme0n1 : 10.01 23595.63 92.17 0.00 0.00 5420.90 4119.41 14293.09 00:16:06.585 =================================================================================================================== 00:16:06.585 Total : 23595.63 92.17 0.00 0.00 5420.90 4119.41 14293.09 00:16:06.585 0 00:16:06.585 07:35:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4094750 00:16:06.585 07:35:10 -- common/autotest_common.sh@926 -- # '[' -z 4094750 ']' 00:16:06.585 07:35:10 -- common/autotest_common.sh@930 -- # kill -0 4094750 00:16:06.585 07:35:10 -- common/autotest_common.sh@931 -- # uname 00:16:06.890 07:35:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.890 07:35:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4094750 00:16:06.890 07:35:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:06.890 07:35:10 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:06.890 07:35:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4094750' 00:16:06.890 killing process with pid 4094750 00:16:06.890 07:35:10 -- common/autotest_common.sh@945 -- # kill 4094750 00:16:06.890 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.890 00:16:06.890 Latency(us) 00:16:06.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.890 =================================================================================================================== 00:16:06.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.890 07:35:10 -- common/autotest_common.sh@950 -- # wait 4094750 00:16:06.890 07:35:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:07.154 07:35:10 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:07.154 07:35:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:07.412 07:35:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:07.412 07:35:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:07.412 07:35:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:07.412 [2024-10-07 07:35:11.288627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:07.412 07:35:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:07.412 07:35:11 -- common/autotest_common.sh@640 -- # local es=0 00:16:07.412 07:35:11 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:07.412 07:35:11 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.412 07:35:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.412 07:35:11 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.412 07:35:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.412 07:35:11 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.412 07:35:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:07.412 07:35:11 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.412 07:35:11 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:07.412 07:35:11 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:07.671 request: 00:16:07.671 { 00:16:07.671 "uuid": "ac1986d3-ae3d-4328-841f-7fa04cb4627e", 00:16:07.671 "method": "bdev_lvol_get_lvstores", 00:16:07.671 "req_id": 1 00:16:07.671 } 00:16:07.671 Got JSON-RPC error response 00:16:07.671 response: 00:16:07.671 { 00:16:07.671 "code": -19, 00:16:07.671 "message": "No such device" 00:16:07.671 } 00:16:07.671 07:35:11 -- common/autotest_common.sh@643 -- # es=1 00:16:07.671 07:35:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:07.671 07:35:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:07.671 07:35:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:07.671 07:35:11 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:07.929 aio_bdev 00:16:07.929 07:35:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 32bb4451-3d96-481a-802c-4488a9367b87 00:16:07.929 07:35:11 -- common/autotest_common.sh@887 -- # local bdev_name=32bb4451-3d96-481a-802c-4488a9367b87 00:16:07.929 07:35:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:07.929 07:35:11 -- common/autotest_common.sh@889 -- # local i 00:16:07.929 07:35:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:07.929 07:35:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:07.929 07:35:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:07.929 07:35:11 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32bb4451-3d96-481a-802c-4488a9367b87 -t 2000 00:16:08.188 [ 00:16:08.188 { 00:16:08.188 "name": "32bb4451-3d96-481a-802c-4488a9367b87", 00:16:08.188 "aliases": [ 00:16:08.188 "lvs/lvol" 00:16:08.188 ], 00:16:08.188 "product_name": "Logical Volume", 00:16:08.188 "block_size": 4096, 00:16:08.188 "num_blocks": 38912, 00:16:08.188 "uuid": "32bb4451-3d96-481a-802c-4488a9367b87", 00:16:08.188 "assigned_rate_limits": { 00:16:08.188 "rw_ios_per_sec": 0, 00:16:08.188 "rw_mbytes_per_sec": 0, 00:16:08.188 "r_mbytes_per_sec": 0, 00:16:08.188 "w_mbytes_per_sec": 0 00:16:08.188 }, 00:16:08.188 "claimed": false, 00:16:08.188 "zoned": false, 00:16:08.188 "supported_io_types": { 00:16:08.188 "read": true, 00:16:08.188 "write": true, 00:16:08.188 "unmap": true, 00:16:08.188 "write_zeroes": true, 00:16:08.188 "flush": false, 00:16:08.188 "reset": true, 00:16:08.188 "compare": false, 00:16:08.188 "compare_and_write": false, 00:16:08.188 "abort": false, 00:16:08.188 "nvme_admin": false, 00:16:08.188 
"nvme_io": false 00:16:08.188 }, 00:16:08.188 "driver_specific": { 00:16:08.188 "lvol": { 00:16:08.188 "lvol_store_uuid": "ac1986d3-ae3d-4328-841f-7fa04cb4627e", 00:16:08.188 "base_bdev": "aio_bdev", 00:16:08.188 "thin_provision": false, 00:16:08.188 "snapshot": false, 00:16:08.188 "clone": false, 00:16:08.188 "esnap_clone": false 00:16:08.188 } 00:16:08.188 } 00:16:08.188 } 00:16:08.188 ] 00:16:08.188 07:35:12 -- common/autotest_common.sh@895 -- # return 0 00:16:08.188 07:35:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:08.188 07:35:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:08.447 07:35:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:08.447 07:35:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:08.447 07:35:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:08.447 07:35:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:08.447 07:35:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32bb4451-3d96-481a-802c-4488a9367b87 00:16:08.706 07:35:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac1986d3-ae3d-4328-841f-7fa04cb4627e 00:16:08.965 07:35:12 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:08.965 07:35:12 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:08.965 00:16:08.965 real 0m15.771s 00:16:08.965 user 0m15.426s 00:16:08.965 sys 0m1.419s 00:16:08.965 07:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:08.965 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:08.965 ************************************ 00:16:08.965 END TEST lvs_grow_clean 00:16:08.965 ************************************ 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:09.224 07:35:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:09.224 07:35:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.224 07:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:09.224 ************************************ 00:16:09.224 START TEST lvs_grow_dirty 00:16:09.224 ************************************ 00:16:09.224 07:35:12 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:09.224 07:35:12 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:09.224 07:35:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:09.224 07:35:13 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:09.483 07:35:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=896ff595-14da-4630-8dff-953ac762284d 00:16:09.483 07:35:13 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:09.483 07:35:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 896ff595-14da-4630-8dff-953ac762284d lvol 150 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:09.742 07:35:13 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:10.000 [2024-10-07 07:35:13.872548] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:10.000 [2024-10-07 07:35:13.872594] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:10.000 true 00:16:10.000 07:35:13 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:10.000 07:35:13 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:10.259 07:35:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:10.259 07:35:14 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:10.518 07:35:14 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:10.518 07:35:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:10.777 07:35:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:11.035 07:35:14 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4097322 00:16:11.035 07:35:14 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:11.035 07:35:14 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4097322 /var/tmp/bdevperf.sock 00:16:11.035 07:35:14 -- common/autotest_common.sh@819 -- # '[' -z 4097322 ']' 00:16:11.035 07:35:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.035 07:35:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.035 07:35:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:11.035 07:35:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.035 07:35:14 -- common/autotest_common.sh@10 -- # set +x 00:16:11.035 07:35:14 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:11.035 [2024-10-07 07:35:14.798992] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:11.035 [2024-10-07 07:35:14.799043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097322 ] 00:16:11.035 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.035 [2024-10-07 07:35:14.854543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.035 [2024-10-07 07:35:14.925738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.969 07:35:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.969 07:35:15 -- common/autotest_common.sh@852 -- # return 0 00:16:11.969 07:35:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:11.969 Nvme0n1 00:16:11.969 07:35:15 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:12.227 [ 00:16:12.227 { 00:16:12.227 "name": "Nvme0n1", 00:16:12.227 "aliases": [ 00:16:12.227 "34c21547-50cd-4f2e-9a59-ec952b1f2fa0" 00:16:12.227 ], 00:16:12.227 "product_name": "NVMe disk", 00:16:12.227 "block_size": 4096, 00:16:12.227 "num_blocks": 38912, 00:16:12.227 "uuid": "34c21547-50cd-4f2e-9a59-ec952b1f2fa0", 00:16:12.227 "assigned_rate_limits": { 00:16:12.227 
"rw_ios_per_sec": 0, 00:16:12.227 "rw_mbytes_per_sec": 0, 00:16:12.227 "r_mbytes_per_sec": 0, 00:16:12.227 "w_mbytes_per_sec": 0 00:16:12.227 }, 00:16:12.227 "claimed": false, 00:16:12.227 "zoned": false, 00:16:12.227 "supported_io_types": { 00:16:12.227 "read": true, 00:16:12.227 "write": true, 00:16:12.227 "unmap": true, 00:16:12.227 "write_zeroes": true, 00:16:12.227 "flush": true, 00:16:12.227 "reset": true, 00:16:12.227 "compare": true, 00:16:12.227 "compare_and_write": true, 00:16:12.227 "abort": true, 00:16:12.227 "nvme_admin": true, 00:16:12.227 "nvme_io": true 00:16:12.227 }, 00:16:12.227 "driver_specific": { 00:16:12.227 "nvme": [ 00:16:12.227 { 00:16:12.227 "trid": { 00:16:12.227 "trtype": "TCP", 00:16:12.227 "adrfam": "IPv4", 00:16:12.227 "traddr": "10.0.0.2", 00:16:12.227 "trsvcid": "4420", 00:16:12.227 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:12.227 }, 00:16:12.227 "ctrlr_data": { 00:16:12.227 "cntlid": 1, 00:16:12.227 "vendor_id": "0x8086", 00:16:12.227 "model_number": "SPDK bdev Controller", 00:16:12.227 "serial_number": "SPDK0", 00:16:12.227 "firmware_revision": "24.01.1", 00:16:12.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.227 "oacs": { 00:16:12.227 "security": 0, 00:16:12.227 "format": 0, 00:16:12.227 "firmware": 0, 00:16:12.227 "ns_manage": 0 00:16:12.227 }, 00:16:12.227 "multi_ctrlr": true, 00:16:12.227 "ana_reporting": false 00:16:12.227 }, 00:16:12.227 "vs": { 00:16:12.227 "nvme_version": "1.3" 00:16:12.227 }, 00:16:12.227 "ns_data": { 00:16:12.227 "id": 1, 00:16:12.227 "can_share": true 00:16:12.227 } 00:16:12.227 } 00:16:12.227 ], 00:16:12.227 "mp_policy": "active_passive" 00:16:12.227 } 00:16:12.227 } 00:16:12.227 ] 00:16:12.227 07:35:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4097558 00:16:12.227 07:35:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:12.227 07:35:16 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock 
perform_tests 00:16:12.227 Running I/O for 10 seconds... 00:16:13.162 Latency(us) 00:16:13.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.162 Nvme0n1 : 1.00 24014.00 93.80 0.00 0.00 0.00 0.00 0.00 00:16:13.162 =================================================================================================================== 00:16:13.162 Total : 24014.00 93.80 0.00 0.00 0.00 0.00 0.00 00:16:13.162 00:16:14.098 07:35:18 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 896ff595-14da-4630-8dff-953ac762284d 00:16:14.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.357 Nvme0n1 : 2.00 24199.50 94.53 0.00 0.00 0.00 0.00 0.00 00:16:14.357 =================================================================================================================== 00:16:14.357 Total : 24199.50 94.53 0.00 0.00 0.00 0.00 0.00 00:16:14.357 00:16:14.357 true 00:16:14.357 07:35:18 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:14.357 07:35:18 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:14.615 07:35:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:14.615 07:35:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:14.615 07:35:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 4097558 00:16:15.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:15.182 Nvme0n1 : 3.00 24160.00 94.38 0.00 0.00 0.00 0.00 0.00 00:16:15.182 =================================================================================================================== 00:16:15.182 Total : 24160.00 94.38 0.00 0.00 0.00 0.00 0.00 00:16:15.182 00:16:16.559 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:16:16.559 Nvme0n1 : 4.00 24211.50 94.58 0.00 0.00 0.00 0.00 0.00 00:16:16.559 =================================================================================================================== 00:16:16.559 Total : 24211.50 94.58 0.00 0.00 0.00 0.00 0.00 00:16:16.559 00:16:17.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:17.492 Nvme0n1 : 5.00 24207.60 94.56 0.00 0.00 0.00 0.00 0.00 00:16:17.492 =================================================================================================================== 00:16:17.492 Total : 24207.60 94.56 0.00 0.00 0.00 0.00 0.00 00:16:17.492 00:16:18.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:18.429 Nvme0n1 : 6.00 24279.50 94.84 0.00 0.00 0.00 0.00 0.00 00:16:18.429 =================================================================================================================== 00:16:18.429 Total : 24279.50 94.84 0.00 0.00 0.00 0.00 0.00 00:16:18.429 00:16:19.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.365 Nvme0n1 : 7.00 24331.14 95.04 0.00 0.00 0.00 0.00 0.00 00:16:19.365 =================================================================================================================== 00:16:19.365 Total : 24331.14 95.04 0.00 0.00 0.00 0.00 0.00 00:16:19.365 00:16:20.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.303 Nvme0n1 : 8.00 24377.75 95.23 0.00 0.00 0.00 0.00 0.00 00:16:20.303 =================================================================================================================== 00:16:20.303 Total : 24377.75 95.23 0.00 0.00 0.00 0.00 0.00 00:16:20.303 00:16:21.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.240 Nvme0n1 : 9.00 24406.78 95.34 0.00 0.00 0.00 0.00 0.00 00:16:21.240 
=================================================================================================================== 00:16:21.240 Total : 24406.78 95.34 0.00 0.00 0.00 0.00 0.00 00:16:21.240 00:16:22.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.176 Nvme0n1 : 10.00 24436.50 95.46 0.00 0.00 0.00 0.00 0.00 00:16:22.176 =================================================================================================================== 00:16:22.176 Total : 24436.50 95.46 0.00 0.00 0.00 0.00 0.00 00:16:22.176 00:16:22.176 00:16:22.176 Latency(us) 00:16:22.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.176 Nvme0n1 : 10.01 24436.17 95.45 0.00 0.00 5234.89 3276.80 15166.90 00:16:22.176 =================================================================================================================== 00:16:22.176 Total : 24436.17 95.45 0.00 0.00 5234.89 3276.80 15166.90 00:16:22.176 0 00:16:22.435 07:35:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4097322 00:16:22.435 07:35:26 -- common/autotest_common.sh@926 -- # '[' -z 4097322 ']' 00:16:22.435 07:35:26 -- common/autotest_common.sh@930 -- # kill -0 4097322 00:16:22.435 07:35:26 -- common/autotest_common.sh@931 -- # uname 00:16:22.435 07:35:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:22.435 07:35:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4097322 00:16:22.435 07:35:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:22.435 07:35:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:22.435 07:35:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4097322' 00:16:22.435 killing process with pid 4097322 00:16:22.435 07:35:26 -- common/autotest_common.sh@945 -- # kill 4097322 00:16:22.436 Received shutdown signal, test time was about 10.000000 seconds 
00:16:22.436 00:16:22.436 Latency(us) 00:16:22.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.436 =================================================================================================================== 00:16:22.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.436 07:35:26 -- common/autotest_common.sh@950 -- # wait 4097322 00:16:22.695 07:35:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:22.695 07:35:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:22.695 07:35:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4094250 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@74 -- # wait 4094250 00:16:22.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4094250 Killed "${NVMF_APP[@]}" "$@" 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:22.954 07:35:26 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:22.954 07:35:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.954 07:35:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:22.954 07:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:22.954 07:35:26 -- nvmf/common.sh@469 -- # nvmfpid=4099374 00:16:22.954 07:35:26 -- nvmf/common.sh@470 -- # waitforlisten 4099374 00:16:22.954 07:35:26 -- common/autotest_common.sh@819 -- # '[' -z 4099374 ']' 00:16:22.954 07:35:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.954 07:35:26 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:16:22.954 07:35:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.954 07:35:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:22.954 07:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:22.954 07:35:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:22.954 [2024-10-07 07:35:26.850946] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:22.954 [2024-10-07 07:35:26.850989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.954 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.954 [2024-10-07 07:35:26.908329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.213 [2024-10-07 07:35:26.983860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.213 [2024-10-07 07:35:26.983961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.213 [2024-10-07 07:35:26.983969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.213 [2024-10-07 07:35:26.983975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:23.213 [2024-10-07 07:35:26.983993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.780 07:35:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.780 07:35:27 -- common/autotest_common.sh@852 -- # return 0 00:16:23.780 07:35:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.780 07:35:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:23.780 07:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 07:35:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.780 07:35:27 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:24.040 [2024-10-07 07:35:27.856405] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:24.040 [2024-10-07 07:35:27.856487] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:24.040 [2024-10-07 07:35:27.856510] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:24.040 07:35:27 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:24.040 07:35:27 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:24.040 07:35:27 -- common/autotest_common.sh@887 -- # local bdev_name=34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:24.040 07:35:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:24.040 07:35:27 -- common/autotest_common.sh@889 -- # local i 00:16:24.040 07:35:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:24.040 07:35:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:24.040 07:35:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:24.299 07:35:28 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 -t 2000 00:16:24.299 [ 00:16:24.299 { 00:16:24.299 "name": "34c21547-50cd-4f2e-9a59-ec952b1f2fa0", 00:16:24.299 "aliases": [ 00:16:24.299 "lvs/lvol" 00:16:24.299 ], 00:16:24.299 "product_name": "Logical Volume", 00:16:24.299 "block_size": 4096, 00:16:24.299 "num_blocks": 38912, 00:16:24.299 "uuid": "34c21547-50cd-4f2e-9a59-ec952b1f2fa0", 00:16:24.299 "assigned_rate_limits": { 00:16:24.299 "rw_ios_per_sec": 0, 00:16:24.299 "rw_mbytes_per_sec": 0, 00:16:24.299 "r_mbytes_per_sec": 0, 00:16:24.299 "w_mbytes_per_sec": 0 00:16:24.299 }, 00:16:24.299 "claimed": false, 00:16:24.299 "zoned": false, 00:16:24.299 "supported_io_types": { 00:16:24.299 "read": true, 00:16:24.299 "write": true, 00:16:24.299 "unmap": true, 00:16:24.299 "write_zeroes": true, 00:16:24.299 "flush": false, 00:16:24.299 "reset": true, 00:16:24.299 "compare": false, 00:16:24.299 "compare_and_write": false, 00:16:24.299 "abort": false, 00:16:24.299 "nvme_admin": false, 00:16:24.299 "nvme_io": false 00:16:24.299 }, 00:16:24.299 "driver_specific": { 00:16:24.299 "lvol": { 00:16:24.299 "lvol_store_uuid": "896ff595-14da-4630-8dff-953ac762284d", 00:16:24.299 "base_bdev": "aio_bdev", 00:16:24.299 "thin_provision": false, 00:16:24.299 "snapshot": false, 00:16:24.299 "clone": false, 00:16:24.299 "esnap_clone": false 00:16:24.299 } 00:16:24.299 } 00:16:24.299 } 00:16:24.299 ] 00:16:24.299 07:35:28 -- common/autotest_common.sh@895 -- # return 0 00:16:24.299 07:35:28 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:24.299 07:35:28 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:24.557 07:35:28 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:24.557 07:35:28 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:24.557 07:35:28 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:24.815 07:35:28 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:24.815 07:35:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:24.815 [2024-10-07 07:35:28.720848] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:24.815 07:35:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:24.815 07:35:28 -- common/autotest_common.sh@640 -- # local es=0 00:16:24.815 07:35:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:24.815 07:35:28 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.815 07:35:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:24.815 07:35:28 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.815 07:35:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:24.815 07:35:28 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.815 07:35:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:24.815 07:35:28 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:24.815 07:35:28 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:24.815 
07:35:28 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:25.074 request: 00:16:25.074 { 00:16:25.074 "uuid": "896ff595-14da-4630-8dff-953ac762284d", 00:16:25.074 "method": "bdev_lvol_get_lvstores", 00:16:25.074 "req_id": 1 00:16:25.074 } 00:16:25.074 Got JSON-RPC error response 00:16:25.074 response: 00:16:25.074 { 00:16:25.074 "code": -19, 00:16:25.074 "message": "No such device" 00:16:25.074 } 00:16:25.074 07:35:28 -- common/autotest_common.sh@643 -- # es=1 00:16:25.074 07:35:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:25.074 07:35:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:25.074 07:35:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:25.074 07:35:28 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:25.333 aio_bdev 00:16:25.333 07:35:29 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:25.333 07:35:29 -- common/autotest_common.sh@887 -- # local bdev_name=34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:25.333 07:35:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:25.333 07:35:29 -- common/autotest_common.sh@889 -- # local i 00:16:25.333 07:35:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:25.333 07:35:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:25.333 07:35:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:25.333 07:35:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 -t 2000 00:16:25.591 [ 00:16:25.591 { 00:16:25.591 "name": 
"34c21547-50cd-4f2e-9a59-ec952b1f2fa0", 00:16:25.591 "aliases": [ 00:16:25.591 "lvs/lvol" 00:16:25.591 ], 00:16:25.591 "product_name": "Logical Volume", 00:16:25.591 "block_size": 4096, 00:16:25.591 "num_blocks": 38912, 00:16:25.591 "uuid": "34c21547-50cd-4f2e-9a59-ec952b1f2fa0", 00:16:25.591 "assigned_rate_limits": { 00:16:25.591 "rw_ios_per_sec": 0, 00:16:25.591 "rw_mbytes_per_sec": 0, 00:16:25.591 "r_mbytes_per_sec": 0, 00:16:25.591 "w_mbytes_per_sec": 0 00:16:25.591 }, 00:16:25.591 "claimed": false, 00:16:25.591 "zoned": false, 00:16:25.591 "supported_io_types": { 00:16:25.591 "read": true, 00:16:25.591 "write": true, 00:16:25.591 "unmap": true, 00:16:25.591 "write_zeroes": true, 00:16:25.591 "flush": false, 00:16:25.591 "reset": true, 00:16:25.591 "compare": false, 00:16:25.591 "compare_and_write": false, 00:16:25.591 "abort": false, 00:16:25.591 "nvme_admin": false, 00:16:25.591 "nvme_io": false 00:16:25.591 }, 00:16:25.591 "driver_specific": { 00:16:25.591 "lvol": { 00:16:25.591 "lvol_store_uuid": "896ff595-14da-4630-8dff-953ac762284d", 00:16:25.591 "base_bdev": "aio_bdev", 00:16:25.591 "thin_provision": false, 00:16:25.591 "snapshot": false, 00:16:25.591 "clone": false, 00:16:25.591 "esnap_clone": false 00:16:25.591 } 00:16:25.591 } 00:16:25.591 } 00:16:25.591 ] 00:16:25.591 07:35:29 -- common/autotest_common.sh@895 -- # return 0 00:16:25.591 07:35:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:25.591 07:35:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:25.850 07:35:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:25.851 07:35:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 896ff595-14da-4630-8dff-953ac762284d 00:16:25.851 07:35:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:25.851 
07:35:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:25.851 07:35:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34c21547-50cd-4f2e-9a59-ec952b1f2fa0 00:16:26.110 07:35:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 896ff595-14da-4630-8dff-953ac762284d 00:16:26.369 07:35:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:26.369 07:35:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:26.629 00:16:26.629 real 0m17.376s 00:16:26.629 user 0m44.474s 00:16:26.629 sys 0m3.939s 00:16:26.629 07:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.629 07:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:26.629 ************************************ 00:16:26.629 END TEST lvs_grow_dirty 00:16:26.629 ************************************ 00:16:26.629 07:35:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:26.629 07:35:30 -- common/autotest_common.sh@796 -- # type=--id 00:16:26.629 07:35:30 -- common/autotest_common.sh@797 -- # id=0 00:16:26.629 07:35:30 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:16:26.629 07:35:30 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:26.629 07:35:30 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:16:26.629 07:35:30 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:16:26.629 07:35:30 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:16:26.629 07:35:30 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:26.629 nvmf_trace.0 00:16:26.629 07:35:30 -- common/autotest_common.sh@811 -- # 
return 0 00:16:26.629 07:35:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:26.629 07:35:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.629 07:35:30 -- nvmf/common.sh@116 -- # sync 00:16:26.629 07:35:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.629 07:35:30 -- nvmf/common.sh@119 -- # set +e 00:16:26.629 07:35:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.629 07:35:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.629 rmmod nvme_tcp 00:16:26.629 rmmod nvme_fabrics 00:16:26.629 rmmod nvme_keyring 00:16:26.629 07:35:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.629 07:35:30 -- nvmf/common.sh@123 -- # set -e 00:16:26.629 07:35:30 -- nvmf/common.sh@124 -- # return 0 00:16:26.629 07:35:30 -- nvmf/common.sh@477 -- # '[' -n 4099374 ']' 00:16:26.629 07:35:30 -- nvmf/common.sh@478 -- # killprocess 4099374 00:16:26.629 07:35:30 -- common/autotest_common.sh@926 -- # '[' -z 4099374 ']' 00:16:26.629 07:35:30 -- common/autotest_common.sh@930 -- # kill -0 4099374 00:16:26.629 07:35:30 -- common/autotest_common.sh@931 -- # uname 00:16:26.629 07:35:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.629 07:35:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4099374 00:16:26.629 07:35:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:26.629 07:35:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:26.629 07:35:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4099374' 00:16:26.629 killing process with pid 4099374 00:16:26.629 07:35:30 -- common/autotest_common.sh@945 -- # kill 4099374 00:16:26.629 07:35:30 -- common/autotest_common.sh@950 -- # wait 4099374 00:16:26.889 07:35:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:26.889 07:35:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:26.889 07:35:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:26.889 07:35:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.889 07:35:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:26.889 07:35:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.889 07:35:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.889 07:35:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.423 07:35:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:29.423 00:16:29.423 real 0m42.280s 00:16:29.423 user 1m5.698s 00:16:29.423 sys 0m9.811s 00:16:29.423 07:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.423 07:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:29.423 ************************************ 00:16:29.423 END TEST nvmf_lvs_grow 00:16:29.423 ************************************ 00:16:29.423 07:35:32 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:29.423 07:35:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:29.423 07:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.423 07:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:29.423 ************************************ 00:16:29.423 START TEST nvmf_bdev_io_wait 00:16:29.423 ************************************ 00:16:29.423 07:35:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:29.423 * Looking for test storage... 
00:16:29.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.423 07:35:32 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.423 07:35:32 -- nvmf/common.sh@7 -- # uname -s 00:16:29.423 07:35:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.423 07:35:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.423 07:35:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.423 07:35:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.423 07:35:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.423 07:35:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.423 07:35:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.423 07:35:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.424 07:35:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.424 07:35:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.424 07:35:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.424 07:35:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.424 07:35:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.424 07:35:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.424 07:35:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.424 07:35:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.424 07:35:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.424 07:35:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.424 07:35:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.424 07:35:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.424 07:35:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.424 07:35:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.424 07:35:32 -- paths/export.sh@5 -- # export PATH 00:16:29.424 07:35:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.424 07:35:32 -- nvmf/common.sh@46 -- # : 0 00:16:29.424 07:35:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:29.424 07:35:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:29.424 07:35:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:29.424 07:35:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.424 07:35:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.424 07:35:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:29.424 07:35:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:29.424 07:35:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:29.424 07:35:32 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.424 07:35:32 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.424 07:35:32 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:29.424 07:35:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:29.424 07:35:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.424 07:35:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:29.424 07:35:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:29.424 07:35:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:29.424 07:35:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.424 07:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.424 07:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.424 
07:35:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:29.424 07:35:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:29.424 07:35:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:29.424 07:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:34.691 07:35:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.691 07:35:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:34.691 07:35:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:34.691 07:35:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:34.691 07:35:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:34.691 07:35:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:34.691 07:35:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:34.691 07:35:38 -- nvmf/common.sh@294 -- # net_devs=() 00:16:34.691 07:35:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:34.691 07:35:38 -- nvmf/common.sh@295 -- # e810=() 00:16:34.691 07:35:38 -- nvmf/common.sh@295 -- # local -ga e810 00:16:34.691 07:35:38 -- nvmf/common.sh@296 -- # x722=() 00:16:34.691 07:35:38 -- nvmf/common.sh@296 -- # local -ga x722 00:16:34.691 07:35:38 -- nvmf/common.sh@297 -- # mlx=() 00:16:34.691 07:35:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:34.691 07:35:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.691 07:35:38 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.691 07:35:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:34.691 07:35:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:34.691 07:35:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:34.691 07:35:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:34.691 07:35:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:34.692 07:35:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.692 07:35:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.692 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.692 07:35:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:34.692 07:35:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.692 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.692 07:35:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:34.692 07:35:38 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.692 07:35:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.692 07:35:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.692 07:35:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.692 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.692 07:35:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.692 07:35:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:34.692 07:35:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.692 07:35:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.692 07:35:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.692 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.692 07:35:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.692 07:35:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:34.692 07:35:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:34.692 07:35:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.692 07:35:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.692 07:35:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.692 07:35:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:34.692 07:35:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.692 07:35:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.692 07:35:38 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:34.692 07:35:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.692 07:35:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.692 07:35:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:34.692 07:35:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:34.692 07:35:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.692 07:35:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.692 07:35:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.692 07:35:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.692 07:35:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:34.692 07:35:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.692 07:35:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.692 07:35:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.692 07:35:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:34.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:16:34.692 00:16:34.692 --- 10.0.0.2 ping statistics --- 00:16:34.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.692 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:16:34.692 07:35:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:16:34.692 00:16:34.692 --- 10.0.0.1 ping statistics --- 00:16:34.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.692 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:34.692 07:35:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.692 07:35:38 -- nvmf/common.sh@410 -- # return 0 00:16:34.692 07:35:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:34.692 07:35:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.692 07:35:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:34.692 07:35:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.692 07:35:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:34.692 07:35:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:34.692 07:35:38 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:34.692 07:35:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:34.692 07:35:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:34.692 07:35:38 -- common/autotest_common.sh@10 -- # set +x 00:16:34.692 07:35:38 -- nvmf/common.sh@469 -- # nvmfpid=4103413 00:16:34.692 07:35:38 -- nvmf/common.sh@470 -- # waitforlisten 4103413 00:16:34.692 07:35:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:34.692 07:35:38 -- common/autotest_common.sh@819 -- # '[' -z 4103413 ']' 00:16:34.692 07:35:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.692 07:35:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:34.692 07:35:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:34.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.692 07:35:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:34.692 07:35:38 -- common/autotest_common.sh@10 -- # set +x 00:16:34.692 [2024-10-07 07:35:38.464944] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:34.692 [2024-10-07 07:35:38.464992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.692 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.692 [2024-10-07 07:35:38.525246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.692 [2024-10-07 07:35:38.600906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:34.692 [2024-10-07 07:35:38.601019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.692 [2024-10-07 07:35:38.601027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.692 [2024-10-07 07:35:38.601033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.692 [2024-10-07 07:35:38.601094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.692 [2024-10-07 07:35:38.601141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.692 [2024-10-07 07:35:38.601232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.692 [2024-10-07 07:35:38.601233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.628 07:35:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:35.628 07:35:39 -- common/autotest_common.sh@852 -- # return 0 00:16:35.628 07:35:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:35.628 07:35:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 07:35:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 [2024-10-07 07:35:39.402492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 Malloc0 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.628 07:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.628 07:35:39 -- common/autotest_common.sh@10 -- # set +x 00:16:35.628 [2024-10-07 07:35:39.457436] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.628 07:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4103622 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@30 -- # READ_PID=4103624 00:16:35.628 07:35:39 -- nvmf/common.sh@520 -- # config=() 00:16:35.628 07:35:39 -- nvmf/common.sh@520 -- # local 
subsystem config 00:16:35.628 07:35:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.628 07:35:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.628 { 00:16:35.628 "params": { 00:16:35.628 "name": "Nvme$subsystem", 00:16:35.628 "trtype": "$TEST_TRANSPORT", 00:16:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.628 "adrfam": "ipv4", 00:16:35.628 "trsvcid": "$NVMF_PORT", 00:16:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.628 "hdgst": ${hdgst:-false}, 00:16:35.628 "ddgst": ${ddgst:-false} 00:16:35.628 }, 00:16:35.628 "method": "bdev_nvme_attach_controller" 00:16:35.628 } 00:16:35.628 EOF 00:16:35.628 )") 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4103626 00:16:35.628 07:35:39 -- nvmf/common.sh@520 -- # config=() 00:16:35.628 07:35:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.628 07:35:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.628 07:35:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.628 { 00:16:35.628 "params": { 00:16:35.628 "name": "Nvme$subsystem", 00:16:35.628 "trtype": "$TEST_TRANSPORT", 00:16:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.628 "adrfam": "ipv4", 00:16:35.628 "trsvcid": "$NVMF_PORT", 00:16:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.628 "hdgst": ${hdgst:-false}, 00:16:35.628 "ddgst": ${ddgst:-false} 00:16:35.628 }, 00:16:35.628 "method": "bdev_nvme_attach_controller" 00:16:35.628 } 00:16:35.628 EOF 00:16:35.628 )") 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4103629 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:35.628 07:35:39 -- target/bdev_io_wait.sh@35 -- # sync 00:16:35.628 07:35:39 -- nvmf/common.sh@520 -- # config=() 00:16:35.628 07:35:39 -- nvmf/common.sh@542 -- # cat 00:16:35.629 07:35:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.629 07:35:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.629 07:35:39 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:35.629 07:35:39 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:35.629 07:35:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.629 { 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme$subsystem", 00:16:35.629 "trtype": "$TEST_TRANSPORT", 00:16:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "$NVMF_PORT", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.629 "hdgst": ${hdgst:-false}, 00:16:35.629 "ddgst": ${ddgst:-false} 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 } 00:16:35.629 EOF 00:16:35.629 )") 00:16:35.629 07:35:39 -- nvmf/common.sh@520 -- # config=() 00:16:35.629 07:35:39 -- nvmf/common.sh@542 -- # cat 00:16:35.629 07:35:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.629 07:35:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.629 07:35:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.629 { 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme$subsystem", 00:16:35.629 "trtype": "$TEST_TRANSPORT", 00:16:35.629 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "$NVMF_PORT", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.629 "hdgst": ${hdgst:-false}, 00:16:35.629 "ddgst": ${ddgst:-false} 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 } 00:16:35.629 EOF 00:16:35.629 )") 00:16:35.629 07:35:39 -- nvmf/common.sh@542 -- # cat 00:16:35.629 07:35:39 -- target/bdev_io_wait.sh@37 -- # wait 4103622 00:16:35.629 07:35:39 -- nvmf/common.sh@542 -- # cat 00:16:35.629 07:35:39 -- nvmf/common.sh@544 -- # jq . 00:16:35.629 07:35:39 -- nvmf/common.sh@544 -- # jq . 00:16:35.629 07:35:39 -- nvmf/common.sh@544 -- # jq . 00:16:35.629 07:35:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.629 07:35:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme1", 00:16:35.629 "trtype": "tcp", 00:16:35.629 "traddr": "10.0.0.2", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "4420", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.629 "hdgst": false, 00:16:35.629 "ddgst": false 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 }' 00:16:35.629 07:35:39 -- nvmf/common.sh@544 -- # jq . 
00:16:35.629 07:35:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.629 07:35:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme1", 00:16:35.629 "trtype": "tcp", 00:16:35.629 "traddr": "10.0.0.2", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "4420", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.629 "hdgst": false, 00:16:35.629 "ddgst": false 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 }' 00:16:35.629 07:35:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.629 07:35:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme1", 00:16:35.629 "trtype": "tcp", 00:16:35.629 "traddr": "10.0.0.2", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "4420", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.629 "hdgst": false, 00:16:35.629 "ddgst": false 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 }' 00:16:35.629 07:35:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.629 07:35:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.629 "params": { 00:16:35.629 "name": "Nvme1", 00:16:35.629 "trtype": "tcp", 00:16:35.629 "traddr": "10.0.0.2", 00:16:35.629 "adrfam": "ipv4", 00:16:35.629 "trsvcid": "4420", 00:16:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.629 "hdgst": false, 00:16:35.629 "ddgst": false 00:16:35.629 }, 00:16:35.629 "method": "bdev_nvme_attach_controller" 00:16:35.629 }' 00:16:35.629 [2024-10-07 07:35:39.503474] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:35.629 [2024-10-07 07:35:39.503515] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:35.629 [2024-10-07 07:35:39.504125] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:35.629 [2024-10-07 07:35:39.504173] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:35.629 [2024-10-07 07:35:39.504873] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:35.629 [2024-10-07 07:35:39.504914] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:35.629 [2024-10-07 07:35:39.505142] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:35.629 [2024-10-07 07:35:39.505180] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:35.629 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.888 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.888 [2024-10-07 07:35:39.685522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.888 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.888 [2024-10-07 07:35:39.763651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:35.888 [2024-10-07 07:35:39.777070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.888 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.888 [2024-10-07 07:35:39.856969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:36.146 [2024-10-07 07:35:39.908956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.146 [2024-10-07 07:35:39.958627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.146 [2024-10-07 07:35:39.995930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:36.146 [2024-10-07 07:35:40.036457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:36.404 Running I/O for 1 seconds... 00:16:36.404 Running I/O for 1 seconds... 00:16:36.404 Running I/O for 1 seconds... 00:16:36.404 Running I/O for 1 seconds... 
00:16:37.340 00:16:37.340 Latency(us) 00:16:37.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.340 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:37.340 Nvme1n1 : 1.00 16360.61 63.91 0.00 0.00 7805.13 3994.58 16227.96 00:16:37.340 =================================================================================================================== 00:16:37.340 Total : 16360.61 63.91 0.00 0.00 7805.13 3994.58 16227.96 00:16:37.340 00:16:37.340 Latency(us) 00:16:37.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.340 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:37.340 Nvme1n1 : 1.01 6632.66 25.91 0.00 0.00 19175.83 9175.04 31207.62 00:16:37.340 =================================================================================================================== 00:16:37.340 Total : 6632.66 25.91 0.00 0.00 19175.83 9175.04 31207.62 00:16:37.340 00:16:37.340 Latency(us) 00:16:37.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.340 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:37.340 Nvme1n1 : 1.00 256857.58 1003.35 0.00 0.00 496.70 206.75 678.77 00:16:37.340 =================================================================================================================== 00:16:37.340 Total : 256857.58 1003.35 0.00 0.00 496.70 206.75 678.77 00:16:37.599 00:16:37.599 Latency(us) 00:16:37.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.599 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:37.599 Nvme1n1 : 1.01 6986.84 27.29 0.00 0.00 18262.87 5835.82 42941.68 00:16:37.599 =================================================================================================================== 00:16:37.599 Total : 6986.84 27.29 0.00 0.00 18262.87 5835.82 42941.68 00:16:37.599 07:35:41 -- target/bdev_io_wait.sh@38 -- 
# wait 4103624 00:16:37.858 07:35:41 -- target/bdev_io_wait.sh@39 -- # wait 4103626 00:16:37.858 07:35:41 -- target/bdev_io_wait.sh@40 -- # wait 4103629 00:16:37.858 07:35:41 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.858 07:35:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.858 07:35:41 -- common/autotest_common.sh@10 -- # set +x 00:16:37.858 07:35:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.858 07:35:41 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:37.858 07:35:41 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:37.858 07:35:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.858 07:35:41 -- nvmf/common.sh@116 -- # sync 00:16:37.858 07:35:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:37.858 07:35:41 -- nvmf/common.sh@119 -- # set +e 00:16:37.858 07:35:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:37.858 07:35:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:37.858 rmmod nvme_tcp 00:16:37.858 rmmod nvme_fabrics 00:16:37.858 rmmod nvme_keyring 00:16:37.858 07:35:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.858 07:35:41 -- nvmf/common.sh@123 -- # set -e 00:16:37.858 07:35:41 -- nvmf/common.sh@124 -- # return 0 00:16:37.858 07:35:41 -- nvmf/common.sh@477 -- # '[' -n 4103413 ']' 00:16:37.858 07:35:41 -- nvmf/common.sh@478 -- # killprocess 4103413 00:16:37.858 07:35:41 -- common/autotest_common.sh@926 -- # '[' -z 4103413 ']' 00:16:37.858 07:35:41 -- common/autotest_common.sh@930 -- # kill -0 4103413 00:16:37.858 07:35:41 -- common/autotest_common.sh@931 -- # uname 00:16:37.858 07:35:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:37.858 07:35:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103413 00:16:37.858 07:35:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:37.858 07:35:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:16:37.858 07:35:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103413' 00:16:37.858 killing process with pid 4103413 00:16:37.858 07:35:41 -- common/autotest_common.sh@945 -- # kill 4103413 00:16:37.858 07:35:41 -- common/autotest_common.sh@950 -- # wait 4103413 00:16:38.117 07:35:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:38.117 07:35:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:38.117 07:35:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:38.117 07:35:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.117 07:35:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:38.117 07:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.117 07:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.117 07:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.653 07:35:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:40.653 00:16:40.653 real 0m11.191s 00:16:40.653 user 0m20.739s 00:16:40.653 sys 0m5.792s 00:16:40.653 07:35:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.653 07:35:44 -- common/autotest_common.sh@10 -- # set +x 00:16:40.653 ************************************ 00:16:40.653 END TEST nvmf_bdev_io_wait 00:16:40.653 ************************************ 00:16:40.653 07:35:44 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:40.653 07:35:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:40.653 07:35:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:40.653 07:35:44 -- common/autotest_common.sh@10 -- # set +x 00:16:40.653 ************************************ 00:16:40.653 START TEST nvmf_queue_depth 00:16:40.653 ************************************ 00:16:40.653 07:35:44 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:40.653 * Looking for test storage... 00:16:40.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.653 07:35:44 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.653 07:35:44 -- nvmf/common.sh@7 -- # uname -s 00:16:40.653 07:35:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.653 07:35:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.653 07:35:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.653 07:35:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.653 07:35:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.653 07:35:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.653 07:35:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.653 07:35:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.653 07:35:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.653 07:35:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.653 07:35:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.654 07:35:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.654 07:35:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.654 07:35:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.654 07:35:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.654 07:35:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.654 07:35:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.654 07:35:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.654 07:35:44 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.654 07:35:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.654 07:35:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.654 07:35:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.654 07:35:44 -- paths/export.sh@5 -- # export PATH 00:16:40.654 07:35:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.654 07:35:44 -- nvmf/common.sh@46 -- # : 0 00:16:40.654 07:35:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:40.654 07:35:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:40.654 07:35:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:40.654 07:35:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.654 07:35:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.654 07:35:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:40.654 07:35:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:40.654 07:35:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:40.654 07:35:44 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:40.654 07:35:44 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:40.654 07:35:44 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.654 07:35:44 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:40.654 07:35:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:40.654 07:35:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.654 07:35:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:40.654 07:35:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:40.654 07:35:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:40.654 07:35:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.654 07:35:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:40.654 07:35:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.654 07:35:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:40.654 07:35:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:40.654 07:35:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:40.654 07:35:44 -- common/autotest_common.sh@10 -- # set +x 00:16:45.950 07:35:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:45.950 07:35:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:45.950 07:35:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:45.950 07:35:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:45.950 07:35:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:45.950 07:35:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:45.950 07:35:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:45.950 07:35:49 -- nvmf/common.sh@294 -- # net_devs=() 00:16:45.950 07:35:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:45.950 07:35:49 -- nvmf/common.sh@295 -- # e810=() 00:16:45.950 07:35:49 -- nvmf/common.sh@295 -- # local -ga e810 00:16:45.950 07:35:49 -- nvmf/common.sh@296 -- # x722=() 00:16:45.950 07:35:49 -- nvmf/common.sh@296 -- # local -ga x722 00:16:45.951 07:35:49 -- nvmf/common.sh@297 -- # mlx=() 00:16:45.951 07:35:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:45.951 07:35:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.951 07:35:49 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.951 07:35:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:45.951 07:35:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:45.951 07:35:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:45.951 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:45.951 07:35:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:45.951 07:35:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:45.951 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:45.951 07:35:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:45.951 
07:35:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:45.951 07:35:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.951 07:35:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:45.951 07:35:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.951 07:35:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:45.951 Found net devices under 0000:af:00.0: cvl_0_0 00:16:45.951 07:35:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:45.951 07:35:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.951 07:35:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:45.951 07:35:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.951 07:35:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:45.951 Found net devices under 0000:af:00.1: cvl_0_1 00:16:45.951 07:35:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:45.951 07:35:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:45.951 07:35:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:45.951 07:35:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.951 07:35:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.951 07:35:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:45.951 07:35:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.951 07:35:49 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.951 07:35:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:45.951 07:35:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.951 07:35:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.951 07:35:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:45.951 07:35:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:45.951 07:35:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.951 07:35:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.951 07:35:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.951 07:35:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.951 07:35:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:45.951 07:35:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.951 07:35:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.951 07:35:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.951 07:35:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:45.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:16:45.951 00:16:45.951 --- 10.0.0.2 ping statistics --- 00:16:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.951 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:16:45.951 07:35:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:16:45.951 00:16:45.951 --- 10.0.0.1 ping statistics --- 00:16:45.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.951 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:45.951 07:35:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.951 07:35:49 -- nvmf/common.sh@410 -- # return 0 00:16:45.951 07:35:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:45.951 07:35:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.951 07:35:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:45.951 07:35:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.951 07:35:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:45.951 07:35:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.951 07:35:49 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:45.952 07:35:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.952 07:35:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:45.952 07:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 07:35:49 -- nvmf/common.sh@469 -- # nvmfpid=4107571 00:16:45.952 07:35:49 -- nvmf/common.sh@470 -- # waitforlisten 4107571 00:16:45.952 07:35:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:45.952 07:35:49 -- common/autotest_common.sh@819 -- # '[' -z 4107571 ']' 00:16:45.952 07:35:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.952 07:35:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.952 07:35:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:45.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.952 07:35:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.952 07:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.210 [2024-10-07 07:35:49.946763] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:46.211 [2024-10-07 07:35:49.946806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.211 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.211 [2024-10-07 07:35:50.006094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.211 [2024-10-07 07:35:50.093379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:46.211 [2024-10-07 07:35:50.093486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.211 [2024-10-07 07:35:50.093495] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.211 [2024-10-07 07:35:50.093501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.211 [2024-10-07 07:35:50.093517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.145 07:35:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:47.145 07:35:50 -- common/autotest_common.sh@852 -- # return 0 00:16:47.145 07:35:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:47.145 07:35:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 07:35:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.145 07:35:50 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.145 07:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 [2024-10-07 07:35:50.806906] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.145 07:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.145 07:35:50 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:47.145 07:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 Malloc0 00:16:47.145 07:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.145 07:35:50 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:47.145 07:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 07:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.145 07:35:50 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:47.145 07:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 07:35:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.145 07:35:50 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.145 07:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 [2024-10-07 07:35:50.860876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.145 07:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:47.145 07:35:50 -- target/queue_depth.sh@30 -- # bdevperf_pid=4107642 00:16:47.145 07:35:50 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:47.145 07:35:50 -- target/queue_depth.sh@33 -- # waitforlisten 4107642 /var/tmp/bdevperf.sock 00:16:47.145 07:35:50 -- common/autotest_common.sh@819 -- # '[' -z 4107642 ']' 00:16:47.145 07:35:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.145 07:35:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.145 07:35:50 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:47.145 07:35:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.145 07:35:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.145 07:35:50 -- common/autotest_common.sh@10 -- # set +x 00:16:47.145 [2024-10-07 07:35:50.907604] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:47.145 [2024-10-07 07:35:50.907645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107642 ] 00:16:47.145 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.145 [2024-10-07 07:35:50.963013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.145 [2024-10-07 07:35:51.038320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.080 07:35:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.080 07:35:51 -- common/autotest_common.sh@852 -- # return 0 00:16:48.080 07:35:51 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.080 07:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:48.080 07:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:48.080 NVMe0n1 00:16:48.080 07:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:48.080 07:35:51 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:48.080 Running I/O for 10 seconds... 
00:16:58.055 00:16:58.055 Latency(us) 00:16:58.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:58.055 Verification LBA range: start 0x0 length 0x4000 00:16:58.055 NVMe0n1 : 10.05 18954.17 74.04 0.00 0.00 53875.99 10423.34 40694.74 00:16:58.055 =================================================================================================================== 00:16:58.055 Total : 18954.17 74.04 0.00 0.00 53875.99 10423.34 40694.74 00:16:58.055 0 00:16:58.055 07:36:02 -- target/queue_depth.sh@39 -- # killprocess 4107642 00:16:58.055 07:36:02 -- common/autotest_common.sh@926 -- # '[' -z 4107642 ']' 00:16:58.055 07:36:02 -- common/autotest_common.sh@930 -- # kill -0 4107642 00:16:58.055 07:36:02 -- common/autotest_common.sh@931 -- # uname 00:16:58.055 07:36:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.313 07:36:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4107642 00:16:58.313 07:36:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:58.313 07:36:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:58.313 07:36:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4107642' 00:16:58.313 killing process with pid 4107642 00:16:58.313 07:36:02 -- common/autotest_common.sh@945 -- # kill 4107642 00:16:58.313 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.313 00:16:58.313 Latency(us) 00:16:58.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.313 =================================================================================================================== 00:16:58.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.313 07:36:02 -- common/autotest_common.sh@950 -- # wait 4107642 00:16:58.313 07:36:02 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:58.313 07:36:02 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:16:58.313 07:36:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:58.313 07:36:02 -- nvmf/common.sh@116 -- # sync 00:16:58.313 07:36:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:58.313 07:36:02 -- nvmf/common.sh@119 -- # set +e 00:16:58.313 07:36:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:58.313 07:36:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:58.573 rmmod nvme_tcp 00:16:58.573 rmmod nvme_fabrics 00:16:58.573 rmmod nvme_keyring 00:16:58.573 07:36:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:58.573 07:36:02 -- nvmf/common.sh@123 -- # set -e 00:16:58.573 07:36:02 -- nvmf/common.sh@124 -- # return 0 00:16:58.573 07:36:02 -- nvmf/common.sh@477 -- # '[' -n 4107571 ']' 00:16:58.573 07:36:02 -- nvmf/common.sh@478 -- # killprocess 4107571 00:16:58.573 07:36:02 -- common/autotest_common.sh@926 -- # '[' -z 4107571 ']' 00:16:58.573 07:36:02 -- common/autotest_common.sh@930 -- # kill -0 4107571 00:16:58.573 07:36:02 -- common/autotest_common.sh@931 -- # uname 00:16:58.573 07:36:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.573 07:36:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4107571 00:16:58.573 07:36:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:58.573 07:36:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:58.573 07:36:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4107571' 00:16:58.573 killing process with pid 4107571 00:16:58.573 07:36:02 -- common/autotest_common.sh@945 -- # kill 4107571 00:16:58.573 07:36:02 -- common/autotest_common.sh@950 -- # wait 4107571 00:16:58.832 07:36:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:58.832 07:36:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:58.832 07:36:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:58.832 07:36:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:16:58.832 07:36:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:58.832 07:36:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.832 07:36:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.832 07:36:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.736 07:36:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:00.736 00:17:00.736 real 0m20.619s 00:17:00.736 user 0m25.013s 00:17:00.736 sys 0m5.894s 00:17:00.736 07:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.736 07:36:04 -- common/autotest_common.sh@10 -- # set +x 00:17:00.736 ************************************ 00:17:00.736 END TEST nvmf_queue_depth 00:17:00.736 ************************************ 00:17:00.995 07:36:04 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:00.995 07:36:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:00.995 07:36:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:00.995 07:36:04 -- common/autotest_common.sh@10 -- # set +x 00:17:00.995 ************************************ 00:17:00.995 START TEST nvmf_multipath 00:17:00.995 ************************************ 00:17:00.995 07:36:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:00.995 * Looking for test storage... 
00:17:00.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.995 07:36:04 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.995 07:36:04 -- nvmf/common.sh@7 -- # uname -s 00:17:00.995 07:36:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.995 07:36:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.995 07:36:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.995 07:36:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.995 07:36:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.995 07:36:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.995 07:36:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.995 07:36:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.995 07:36:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.995 07:36:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.995 07:36:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.995 07:36:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:00.995 07:36:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.995 07:36:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.995 07:36:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.995 07:36:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.995 07:36:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.995 07:36:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.995 07:36:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.995 07:36:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.995 07:36:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.995 07:36:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.995 07:36:04 -- paths/export.sh@5 -- # export PATH 00:17:00.995 07:36:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.995 07:36:04 -- nvmf/common.sh@46 -- # : 0 00:17:00.995 07:36:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:00.995 07:36:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:00.995 07:36:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:00.995 07:36:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.995 07:36:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.995 07:36:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:00.995 07:36:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:00.995 07:36:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:00.995 07:36:04 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.995 07:36:04 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.995 07:36:04 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:00.995 07:36:04 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.995 07:36:04 -- target/multipath.sh@43 -- # nvmftestinit 00:17:00.995 07:36:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:00.995 07:36:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.995 07:36:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:00.995 07:36:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:00.995 07:36:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:00.995 07:36:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:00.995 07:36:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.995 07:36:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.995 07:36:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:00.995 07:36:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:00.995 07:36:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:00.995 07:36:04 -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 07:36:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:06.357 07:36:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:06.357 07:36:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:06.357 07:36:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:06.357 07:36:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:06.357 07:36:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:06.357 07:36:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:06.357 07:36:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:06.357 07:36:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:06.357 07:36:10 -- nvmf/common.sh@295 -- # e810=() 00:17:06.357 07:36:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:06.357 07:36:10 -- nvmf/common.sh@296 -- # x722=() 00:17:06.357 07:36:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:06.357 07:36:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:06.357 07:36:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:06.357 07:36:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:06.357 07:36:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.357 07:36:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:06.357 07:36:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:06.357 07:36:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:06.357 07:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.357 07:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:06.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:06.357 07:36:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.357 07:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.358 07:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:06.358 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:06.358 07:36:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.358 07:36:10 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:06.358 07:36:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.358 07:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.358 07:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.358 07:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.358 07:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:06.358 Found net devices under 0000:af:00.0: cvl_0_0 00:17:06.358 07:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.358 07:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.358 07:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.358 07:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.358 07:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.358 07:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:06.358 Found net devices under 0000:af:00.1: cvl_0_1 00:17:06.358 07:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.358 07:36:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:06.358 07:36:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:06.358 07:36:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:06.358 07:36:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.358 07:36:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.358 07:36:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.358 07:36:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:17:06.358 07:36:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.358 07:36:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.358 07:36:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:06.358 07:36:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.358 07:36:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.358 07:36:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:06.358 07:36:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:06.358 07:36:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.358 07:36:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.358 07:36:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.358 07:36:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.358 07:36:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:06.358 07:36:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.358 07:36:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.358 07:36:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.358 07:36:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:06.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:17:06.358 00:17:06.358 --- 10.0.0.2 ping statistics --- 00:17:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.358 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:06.358 07:36:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:17:06.358 00:17:06.358 --- 10.0.0.1 ping statistics --- 00:17:06.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.358 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:06.358 07:36:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.358 07:36:10 -- nvmf/common.sh@410 -- # return 0 00:17:06.358 07:36:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:06.358 07:36:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.358 07:36:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:06.358 07:36:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.358 07:36:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:06.358 07:36:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:06.358 07:36:10 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:06.358 07:36:10 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:06.358 only one NIC for nvmf test 00:17:06.358 07:36:10 -- target/multipath.sh@47 -- # nvmftestfini 00:17:06.358 07:36:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:06.358 07:36:10 -- nvmf/common.sh@116 -- # sync 00:17:06.358 07:36:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:06.358 07:36:10 -- nvmf/common.sh@119 -- # set +e 00:17:06.358 07:36:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.358 07:36:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:06.358 rmmod nvme_tcp 00:17:06.617 rmmod nvme_fabrics 00:17:06.617 rmmod nvme_keyring 00:17:06.617 07:36:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.617 07:36:10 -- nvmf/common.sh@123 -- # set -e 00:17:06.617 07:36:10 -- nvmf/common.sh@124 -- # return 0 00:17:06.617 07:36:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:06.617 07:36:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.617 07:36:10 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:06.617 07:36:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:06.617 07:36:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.617 07:36:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:06.617 07:36:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.617 07:36:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.617 07:36:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.522 07:36:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:08.522 07:36:12 -- target/multipath.sh@48 -- # exit 0 00:17:08.522 07:36:12 -- target/multipath.sh@1 -- # nvmftestfini 00:17:08.522 07:36:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:08.522 07:36:12 -- nvmf/common.sh@116 -- # sync 00:17:08.522 07:36:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:08.522 07:36:12 -- nvmf/common.sh@119 -- # set +e 00:17:08.522 07:36:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:08.522 07:36:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:08.522 07:36:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:08.522 07:36:12 -- nvmf/common.sh@123 -- # set -e 00:17:08.522 07:36:12 -- nvmf/common.sh@124 -- # return 0 00:17:08.522 07:36:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:08.522 07:36:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:08.522 07:36:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:08.522 07:36:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:08.522 07:36:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.522 07:36:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:08.522 07:36:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.522 07:36:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.522 07:36:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.522 07:36:12 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:08.522 00:17:08.522 real 0m7.744s 00:17:08.522 user 0m1.655s 00:17:08.522 sys 0m4.061s 00:17:08.522 07:36:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.522 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:08.522 ************************************ 00:17:08.522 END TEST nvmf_multipath 00:17:08.522 ************************************ 00:17:08.782 07:36:12 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:08.782 07:36:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:08.782 07:36:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.782 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:08.782 ************************************ 00:17:08.782 START TEST nvmf_zcopy 00:17:08.782 ************************************ 00:17:08.782 07:36:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:08.782 * Looking for test storage... 
00:17:08.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.782 07:36:12 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.782 07:36:12 -- nvmf/common.sh@7 -- # uname -s 00:17:08.782 07:36:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.782 07:36:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.782 07:36:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.782 07:36:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.782 07:36:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.782 07:36:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.782 07:36:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.782 07:36:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.782 07:36:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.782 07:36:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.782 07:36:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.782 07:36:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.782 07:36:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.782 07:36:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.782 07:36:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.782 07:36:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.782 07:36:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.782 07:36:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.782 07:36:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.782 07:36:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 07:36:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 07:36:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 07:36:12 -- paths/export.sh@5 -- # export PATH 00:17:08.782 07:36:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.782 07:36:12 -- nvmf/common.sh@46 -- # : 0 00:17:08.782 07:36:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:08.782 07:36:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:08.782 07:36:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:08.782 07:36:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.782 07:36:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.782 07:36:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:08.782 07:36:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:08.782 07:36:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:08.782 07:36:12 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:08.782 07:36:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:08.782 07:36:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.782 07:36:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:08.782 07:36:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:08.782 07:36:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:08.782 07:36:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.782 07:36:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.782 07:36:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.782 07:36:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:08.782 07:36:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:08.782 07:36:12 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:17:08.782 07:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:15.347 07:36:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.347 07:36:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:15.347 07:36:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:15.347 07:36:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:15.347 07:36:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:15.347 07:36:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:15.347 07:36:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:15.347 07:36:18 -- nvmf/common.sh@294 -- # net_devs=() 00:17:15.347 07:36:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:15.347 07:36:18 -- nvmf/common.sh@295 -- # e810=() 00:17:15.347 07:36:18 -- nvmf/common.sh@295 -- # local -ga e810 00:17:15.347 07:36:18 -- nvmf/common.sh@296 -- # x722=() 00:17:15.347 07:36:18 -- nvmf/common.sh@296 -- # local -ga x722 00:17:15.347 07:36:18 -- nvmf/common.sh@297 -- # mlx=() 00:17:15.347 07:36:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:15.347 07:36:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.347 07:36:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.347 07:36:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:15.347 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:15.347 07:36:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.347 07:36:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:15.347 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:15.347 07:36:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:17:15.347 07:36:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.347 07:36:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.347 07:36:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:15.347 Found net devices under 0000:af:00.0: cvl_0_0 00:17:15.347 07:36:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:15.347 07:36:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.347 07:36:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.347 07:36:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:15.347 Found net devices under 0000:af:00.1: cvl_0_1 00:17:15.347 07:36:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:15.347 07:36:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:15.347 07:36:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.347 07:36:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.347 07:36:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:15.347 07:36:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.347 07:36:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.347 07:36:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:15.347 07:36:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:17:15.347 07:36:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.347 07:36:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:15.347 07:36:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:15.347 07:36:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.347 07:36:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.347 07:36:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.347 07:36:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.347 07:36:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:15.347 07:36:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.347 07:36:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.347 07:36:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.347 07:36:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:15.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:17:15.347 00:17:15.347 --- 10.0.0.2 ping statistics --- 00:17:15.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.347 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:17:15.347 07:36:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:17:15.347 00:17:15.347 --- 10.0.0.1 ping statistics --- 00:17:15.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.347 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:15.347 07:36:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.347 07:36:18 -- nvmf/common.sh@410 -- # return 0 00:17:15.347 07:36:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:15.347 07:36:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.347 07:36:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:15.347 07:36:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:15.348 07:36:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.348 07:36:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:15.348 07:36:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:15.348 07:36:18 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:15.348 07:36:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.348 07:36:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:15.348 07:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.348 07:36:18 -- nvmf/common.sh@469 -- # nvmfpid=4116876 00:17:15.348 07:36:18 -- nvmf/common.sh@470 -- # waitforlisten 4116876 00:17:15.348 07:36:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.348 07:36:18 -- common/autotest_common.sh@819 -- # '[' -z 4116876 ']' 00:17:15.348 07:36:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.348 07:36:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.348 07:36:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:15.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.348 07:36:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.348 07:36:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.348 [2024-10-07 07:36:18.471188] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:15.348 [2024-10-07 07:36:18.471234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.348 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.348 [2024-10-07 07:36:18.531523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.348 [2024-10-07 07:36:18.602763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.348 [2024-10-07 07:36:18.602871] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.348 [2024-10-07 07:36:18.602879] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.348 [2024-10-07 07:36:18.602889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:15.348 [2024-10-07 07:36:18.602910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.348 07:36:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.348 07:36:19 -- common/autotest_common.sh@852 -- # return 0 00:17:15.348 07:36:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.348 07:36:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:15.348 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 07:36:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.606 07:36:19 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:15.606 07:36:19 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:15.606 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.606 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 [2024-10-07 07:36:19.320741] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.606 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.606 07:36:19 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.606 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.606 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.606 07:36:19 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.606 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.606 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.607 [2024-10-07 07:36:19.336929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.607 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.607 07:36:19 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:15.607 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.607 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.607 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.607 07:36:19 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:15.607 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.607 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.607 malloc0 00:17:15.607 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.607 07:36:19 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:15.607 07:36:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.607 07:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:15.607 07:36:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.607 07:36:19 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:15.607 07:36:19 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:15.607 07:36:19 -- nvmf/common.sh@520 -- # config=() 00:17:15.607 07:36:19 -- nvmf/common.sh@520 -- # local subsystem config 00:17:15.607 07:36:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:15.607 07:36:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:15.607 { 00:17:15.607 "params": { 00:17:15.607 "name": "Nvme$subsystem", 00:17:15.607 "trtype": "$TEST_TRANSPORT", 00:17:15.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.607 "adrfam": "ipv4", 00:17:15.607 "trsvcid": "$NVMF_PORT", 00:17:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.607 "hdgst": ${hdgst:-false}, 00:17:15.607 "ddgst": ${ddgst:-false} 00:17:15.607 }, 00:17:15.607 "method": "bdev_nvme_attach_controller" 00:17:15.607 } 00:17:15.607 
EOF 00:17:15.607 )") 00:17:15.607 07:36:19 -- nvmf/common.sh@542 -- # cat 00:17:15.607 07:36:19 -- nvmf/common.sh@544 -- # jq . 00:17:15.607 07:36:19 -- nvmf/common.sh@545 -- # IFS=, 00:17:15.607 07:36:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:15.607 "params": { 00:17:15.607 "name": "Nvme1", 00:17:15.607 "trtype": "tcp", 00:17:15.607 "traddr": "10.0.0.2", 00:17:15.607 "adrfam": "ipv4", 00:17:15.607 "trsvcid": "4420", 00:17:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.607 "hdgst": false, 00:17:15.607 "ddgst": false 00:17:15.607 }, 00:17:15.607 "method": "bdev_nvme_attach_controller" 00:17:15.607 }' 00:17:15.607 [2024-10-07 07:36:19.410437] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:15.607 [2024-10-07 07:36:19.410480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117115 ] 00:17:15.607 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.607 [2024-10-07 07:36:19.465618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.607 [2024-10-07 07:36:19.534433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.865 Running I/O for 10 seconds... 
00:17:25.846 00:17:25.846 Latency(us) 00:17:25.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.846 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:25.846 Verification LBA range: start 0x0 length 0x1000 00:17:25.846 Nvme1n1 : 10.01 13325.75 104.11 0.00 0.00 9582.84 1014.25 18849.40 00:17:25.846 =================================================================================================================== 00:17:25.846 Total : 13325.75 104.11 0.00 0.00 9582.84 1014.25 18849.40 00:17:26.106 07:36:29 -- target/zcopy.sh@39 -- # perfpid=4118850 00:17:26.106 07:36:29 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:26.106 07:36:29 -- common/autotest_common.sh@10 -- # set +x 00:17:26.106 07:36:29 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:26.106 07:36:29 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:26.106 07:36:29 -- nvmf/common.sh@520 -- # config=() 00:17:26.106 07:36:29 -- nvmf/common.sh@520 -- # local subsystem config 00:17:26.106 07:36:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:26.106 07:36:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:26.106 { 00:17:26.106 "params": { 00:17:26.106 "name": "Nvme$subsystem", 00:17:26.106 "trtype": "$TEST_TRANSPORT", 00:17:26.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.106 "adrfam": "ipv4", 00:17:26.106 "trsvcid": "$NVMF_PORT", 00:17:26.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.106 "hdgst": ${hdgst:-false}, 00:17:26.106 "ddgst": ${ddgst:-false} 00:17:26.106 }, 00:17:26.106 "method": "bdev_nvme_attach_controller" 00:17:26.106 } 00:17:26.106 EOF 00:17:26.106 )") 00:17:26.106 07:36:29 -- nvmf/common.sh@542 -- # cat 00:17:26.106 [2024-10-07 07:36:29.966449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:17:26.106 [2024-10-07 07:36:29.966486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 07:36:29 -- nvmf/common.sh@544 -- # jq . 00:17:26.106 07:36:29 -- nvmf/common.sh@545 -- # IFS=, 00:17:26.106 07:36:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:26.106 "params": { 00:17:26.106 "name": "Nvme1", 00:17:26.106 "trtype": "tcp", 00:17:26.106 "traddr": "10.0.0.2", 00:17:26.106 "adrfam": "ipv4", 00:17:26.106 "trsvcid": "4420", 00:17:26.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.106 "hdgst": false, 00:17:26.106 "ddgst": false 00:17:26.106 }, 00:17:26.106 "method": "bdev_nvme_attach_controller" 00:17:26.106 }' 00:17:26.106 [2024-10-07 07:36:29.974433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:29.974444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:29.982450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:29.982460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:29.987214] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:26.106 [2024-10-07 07:36:29.987254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4118850 ] 00:17:26.106 [2024-10-07 07:36:29.990469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:29.990479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:29.998493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:29.998507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:30.006517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:30.006528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.106 [2024-10-07 07:36:30.014589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:30.014621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:30.022564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:30.022578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:30.030581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:30.030600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.106 [2024-10-07 07:36:30.038600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.106 [2024-10-07 07:36:30.038610] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:26.106 [2024-10-07 07:36:30.046222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:26.106 [2024-10-07 07:36:30.046640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:26.106 [2024-10-07 07:36:30.046661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:1793 / nvmf_rpc.c:1513 ERROR pair repeats with advancing timestamps ...]
00:17:26.365 [2024-10-07 07:36:30.124220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... ERROR pair continues repeating ...]
00:17:26.625 Running I/O for 5 seconds...
[... ERROR pair continues repeating through 07:36:31.509257 ...]
00:17:27.665 [2024-10-07 07:36:31.509275]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.518303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.518322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.527036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.527054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.535690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.535708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.544853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.544871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.553513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.553531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.562391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.562409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.571166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.571183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.579976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.579993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:27.665 [2024-10-07 07:36:31.589308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.589326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.597786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.597803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.606789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.606807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.615131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.615149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.624353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.624371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.665 [2024-10-07 07:36:31.633810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.665 [2024-10-07 07:36:31.633828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.642319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.642337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.651478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.651496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.660100] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.660119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.668854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.668872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.678073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.678092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.687378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.687397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.696208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.696226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.705157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.705175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.713903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.713921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.722496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.722514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.731748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.731767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.740339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.740357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.748571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.748588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.757416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.757433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.766048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.766071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.774835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.774853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.783102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.783119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.791649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.791667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.800334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 
[2024-10-07 07:36:31.800352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.809036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.924 [2024-10-07 07:36:31.809054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.924 [2024-10-07 07:36:31.817455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.817472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.826482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.826500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.835550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.835568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.843973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.843991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.852681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.852699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.861582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.861601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.870493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.870511] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.879521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.879539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.925 [2024-10-07 07:36:31.888353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.925 [2024-10-07 07:36:31.888372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.897098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.897116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.906108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.906125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.914524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.914542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.923279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.923297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.932638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.932656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.941068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.941107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:28.183 [2024-10-07 07:36:31.949802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.949821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.958259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.958278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.183 [2024-10-07 07:36:31.966985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.183 [2024-10-07 07:36:31.967003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:31.976024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:31.976041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:31.984304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:31.984322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:31.993445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:31.993463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.001728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.001746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.010584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.010602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.019898] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.019916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.028265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.028282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.037031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.037049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.046018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.046036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.055187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.055205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.063561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.063579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.072251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.072269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.080844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.080861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.089910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.089928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.098326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.098344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.107095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.107117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.116037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.116055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.124341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.124359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.132837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.132854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.141782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.141800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.184 [2024-10-07 07:36:32.149774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.184 [2024-10-07 07:36:32.149791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.159156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 
[2024-10-07 07:36:32.159173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.167935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.167952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.176039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.176056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.185249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.185267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.193188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.193205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.202349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.202367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.210673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.210690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.217161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.217178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.227750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.227769] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.236428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.236446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.244841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.244859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.254012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.254030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.262481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.262499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.271115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.271137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.279016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.279035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.287512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.287530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.296088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.296106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:28.443 [2024-10-07 07:36:32.305220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.305238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.314290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.314308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.322873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.322891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.331868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.331886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.340293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.340311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.348936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.348954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.357987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.358005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.366446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.366464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.375625] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.375643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.383982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.384000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.392723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.392742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.401614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.401632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.443 [2024-10-07 07:36:32.410473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.443 [2024-10-07 07:36:32.410492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.419611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.419629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.428221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.428239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.437029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.437051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.445749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.445767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.454501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.454519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.463092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.463110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.471351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.471368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.480154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.480173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.489231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.489252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.497937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.497956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.506272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.506290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.514783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 
[2024-10-07 07:36:32.514802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.523704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.523724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.532244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.532263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.540839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.540858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.549774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.549792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.558645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.558663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.567008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.567026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.575575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.575593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.584618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.584636] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.593402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.593421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.602156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.602176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.611374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.611393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.619756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.619774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.628977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.628996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.637588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.637607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.644092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.644111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.702 [2024-10-07 07:36:32.654268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.654287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:28.702 [2024-10-07 07:36:32.663152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.702 [2024-10-07 07:36:32.663170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.672300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.672318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.680705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.680723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.689640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.689659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.698868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.698888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.707890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.707908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.717171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.717190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.725875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.725894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.734585] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.734604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.743329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.743347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.752390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.752408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.761274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.761292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.769965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.769982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.778957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.778975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.787246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.787263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.796197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.796224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.805057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.805081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.813690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.813708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.822557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.822575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.831350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.831368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.839872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.839890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.848789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.848807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.858033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.858052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.866903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.866920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.875581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 
[2024-10-07 07:36:32.875600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.884449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.884467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.893145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.893162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.902201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.902219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.910913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.910931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.919573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.919590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.961 [2024-10-07 07:36:32.928109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:28.961 [2024-10-07 07:36:32.928126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.937074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.937092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.946254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.946272] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.955069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.955086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.964116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.964136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.972999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.973017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.982205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.982223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.991476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.991494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:32.999663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:32.999681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.008555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.008572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.017840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.017857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:29.220 [2024-10-07 07:36:33.026687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.026705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.035490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.035509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.044129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.044147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.053356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.053375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.062052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.062076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.070233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.070251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.078757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.078774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.087329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.087347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.096206] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.096224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.104907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.104925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.113521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.113539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.122887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.122905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.131382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.131400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.140787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.220 [2024-10-07 07:36:33.140805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.220 [2024-10-07 07:36:33.149064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.221 [2024-10-07 07:36:33.149082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.221 [2024-10-07 07:36:33.157738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.221 [2024-10-07 07:36:33.157756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.221 [2024-10-07 07:36:33.166915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:29.221 [2024-10-07 07:36:33.166932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.221 [2024-10-07 07:36:33.174986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.221 [2024-10-07 07:36:33.175003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.221 [2024-10-07 07:36:33.183485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.221 [2024-10-07 07:36:33.183503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.192127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.192145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.200970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.200987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.210077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.210095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.218336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.218353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.227321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.227339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.235487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 
[2024-10-07 07:36:33.235505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.244327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.244345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.253333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.253351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.262313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.262335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.270375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.270393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.279069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.279087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.287825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.287843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.296451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.296469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.305010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.305028] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.314176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.314194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.322249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.322267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.330897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.330914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.339982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.340000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.348663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.348680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.357008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.357025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.365505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.365522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.374181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.374199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:29.479 [2024-10-07 07:36:33.382326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.382344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.391118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.391136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.399852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.399870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.408538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.408556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.417774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.417792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.426334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.426356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.435447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.435465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.479 [2024-10-07 07:36:33.443741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.479 [2024-10-07 07:36:33.443758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.452683] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.452701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.461397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.461415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.470130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.470148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.478979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.478997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.487706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.487724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.496793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.496811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.505323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.505341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.514242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.514261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.523155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.523173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.532072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.532089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.541298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.541316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.549709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.549726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.558983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.559001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.567018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.567036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.575920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.575938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.584665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.584683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.593332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 
[2024-10-07 07:36:33.593354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.602141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.602159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.611205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.611223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.619804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.619821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.628460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.628478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.636756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.636773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.645485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.645503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.653846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.653864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.662939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.662957] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.671651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.671670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.680485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.680503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.689291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.689309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.697632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.697650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.739 [2024-10-07 07:36:33.706479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.739 [2024-10-07 07:36:33.706497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.999 [2024-10-07 07:36:33.715230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.999 [2024-10-07 07:36:33.715248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.999 [2024-10-07 07:36:33.723853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.999 [2024-10-07 07:36:33.723871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:29.999 [2024-10-07 07:36:33.732376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.999 [2024-10-07 07:36:33.732394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:29.999 [2024-10-07 07:36:33.741157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:29.999 [2024-10-07 07:36:33.741177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same two-message pair — subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 8-9 ms intervals from 07:36:33.749 through 07:36:35.153, elapsed markers 00:17:29.999 through 00:17:31.297; ~160 identical repetitions elided)
00:17:31.297 [2024-10-07 07:36:35.162650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.162668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:17:31.297 [2024-10-07 07:36:35.171518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.171535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.179820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.179838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.187989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.188006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.197289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.197308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.205593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.205611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.214183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.214200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.223030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.223048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.231424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.231441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.240151] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.240169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.249358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.249376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.257603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.257621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.297 [2024-10-07 07:36:35.266869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.297 [2024-10-07 07:36:35.266887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.275596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.275614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.284489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.284508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.292727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.292745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.301231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.301249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.310174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.310193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.318932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.318952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.328252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.328270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.337159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.337177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.345817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.345835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.354085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.354103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.362616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.362634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.371056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 [2024-10-07 07:36:35.371082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.556 [2024-10-07 07:36:35.379958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.556 
[2024-10-07 07:36:35.379977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.388715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.388734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.397256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.397274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.406125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.406144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.412035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.412053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 00:17:31.557 Latency(us) 00:17:31.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.557 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:31.557 Nvme1n1 : 5.01 17832.74 139.32 0.00 0.00 7172.25 2402.99 19473.55 00:17:31.557 =================================================================================================================== 00:17:31.557 Total : 17832.74 139.32 0.00 0.00 7172.25 2402.99 19473.55 00:17:31.557 [2024-10-07 07:36:35.420049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.420071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.428075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.428089] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.436098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.436109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.444128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.444144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.452140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.452153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.460163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.460178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.468181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.468193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.476201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.476215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.484224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.484239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.492246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.492262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:31.557 [2024-10-07 07:36:35.500265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.500278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.508286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.508300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.516307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.516320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.557 [2024-10-07 07:36:35.524331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.557 [2024-10-07 07:36:35.524341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.532351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.532361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.540372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.540382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.548400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.548416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.556416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.556429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.564434] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.564444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.572455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.572465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.580477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.580487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.588498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.588510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.596523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.596534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.604542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.604552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 [2024-10-07 07:36:35.612565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:31.817 [2024-10-07 07:36:35.612574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4118850) - No such process 00:17:31.817 07:36:35 -- target/zcopy.sh@49 -- # wait 4118850 00:17:31.817 07:36:35 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.817 07:36:35 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.817 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:17:31.817 07:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.817 07:36:35 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:31.817 07:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.817 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:17:31.817 delay0 00:17:31.817 07:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.817 07:36:35 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:31.817 07:36:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:31.817 07:36:35 -- common/autotest_common.sh@10 -- # set +x 00:17:31.817 07:36:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:31.817 07:36:35 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:31.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.817 [2024-10-07 07:36:35.722820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:38.378 Initializing NVMe Controllers 00:17:38.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:38.378 Initialization complete. Launching workers. 
00:17:38.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 00:17:38.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 40 00:17:38.378 success 157, unsuccess 206, failed 0 00:17:38.378 07:36:41 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:38.378 07:36:41 -- target/zcopy.sh@60 -- # nvmftestfini 00:17:38.378 07:36:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:38.378 07:36:41 -- nvmf/common.sh@116 -- # sync 00:17:38.378 07:36:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:38.378 07:36:41 -- nvmf/common.sh@119 -- # set +e 00:17:38.378 07:36:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:38.378 07:36:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:38.378 rmmod nvme_tcp 00:17:38.378 rmmod nvme_fabrics 00:17:38.378 rmmod nvme_keyring 00:17:38.378 07:36:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:38.378 07:36:41 -- nvmf/common.sh@123 -- # set -e 00:17:38.378 07:36:41 -- nvmf/common.sh@124 -- # return 0 00:17:38.378 07:36:41 -- nvmf/common.sh@477 -- # '[' -n 4116876 ']' 00:17:38.378 07:36:41 -- nvmf/common.sh@478 -- # killprocess 4116876 00:17:38.378 07:36:41 -- common/autotest_common.sh@926 -- # '[' -z 4116876 ']' 00:17:38.378 07:36:41 -- common/autotest_common.sh@930 -- # kill -0 4116876 00:17:38.378 07:36:41 -- common/autotest_common.sh@931 -- # uname 00:17:38.378 07:36:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.378 07:36:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4116876 00:17:38.378 07:36:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:38.378 07:36:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:38.378 07:36:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4116876' 00:17:38.378 killing process with pid 4116876 00:17:38.378 07:36:41 -- common/autotest_common.sh@945 -- # kill 4116876 
00:17:38.378 07:36:41 -- common/autotest_common.sh@950 -- # wait 4116876 00:17:38.378 07:36:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.378 07:36:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:38.378 07:36:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:38.378 07:36:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.378 07:36:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:38.378 07:36:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.378 07:36:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.378 07:36:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.284 07:36:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:40.284 00:17:40.284 real 0m31.694s 00:17:40.284 user 0m42.065s 00:17:40.284 sys 0m10.757s 00:17:40.284 07:36:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.284 07:36:44 -- common/autotest_common.sh@10 -- # set +x 00:17:40.284 ************************************ 00:17:40.284 END TEST nvmf_zcopy 00:17:40.284 ************************************ 00:17:40.284 07:36:44 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:40.284 07:36:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.284 07:36:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.284 07:36:44 -- common/autotest_common.sh@10 -- # set +x 00:17:40.284 ************************************ 00:17:40.284 START TEST nvmf_nmic 00:17:40.284 ************************************ 00:17:40.284 07:36:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:40.542 * Looking for test storage... 
00:17:40.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.542 07:36:44 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.542 07:36:44 -- nvmf/common.sh@7 -- # uname -s 00:17:40.542 07:36:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.542 07:36:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.542 07:36:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.542 07:36:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.542 07:36:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.542 07:36:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.542 07:36:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.542 07:36:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.542 07:36:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.542 07:36:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.542 07:36:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:40.542 07:36:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:40.542 07:36:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.542 07:36:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.542 07:36:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.542 07:36:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.542 07:36:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.542 07:36:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.542 07:36:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.542 07:36:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.542 07:36:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.542 07:36:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.542 07:36:44 -- paths/export.sh@5 -- # export PATH 00:17:40.542 07:36:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.542 07:36:44 -- nvmf/common.sh@46 -- # : 0 00:17:40.542 07:36:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.542 07:36:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.542 07:36:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.542 07:36:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.542 07:36:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.542 07:36:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.542 07:36:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.542 07:36:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.542 07:36:44 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.542 07:36:44 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.542 07:36:44 -- target/nmic.sh@14 -- # nvmftestinit 00:17:40.542 07:36:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.542 07:36:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.542 07:36:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.542 07:36:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.542 07:36:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.542 07:36:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.542 07:36:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.542 07:36:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.542 07:36:44 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:17:40.542 07:36:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:40.542 07:36:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:40.542 07:36:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.815 07:36:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:45.815 07:36:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:45.815 07:36:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:45.815 07:36:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:45.815 07:36:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:45.815 07:36:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:45.815 07:36:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:45.815 07:36:49 -- nvmf/common.sh@294 -- # net_devs=() 00:17:45.815 07:36:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:45.815 07:36:49 -- nvmf/common.sh@295 -- # e810=() 00:17:45.815 07:36:49 -- nvmf/common.sh@295 -- # local -ga e810 00:17:45.815 07:36:49 -- nvmf/common.sh@296 -- # x722=() 00:17:45.815 07:36:49 -- nvmf/common.sh@296 -- # local -ga x722 00:17:45.815 07:36:49 -- nvmf/common.sh@297 -- # mlx=() 00:17:45.815 07:36:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:45.815 07:36:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.815 07:36:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:45.815 07:36:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:45.815 07:36:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:45.815 07:36:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:45.815 07:36:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:45.815 07:36:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:45.815 07:36:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:45.815 07:36:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:45.815 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:45.815 07:36:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:45.815 07:36:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:45.816 07:36:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:45.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:45.816 07:36:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:45.816 07:36:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:45.816 07:36:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.816 07:36:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:45.816 07:36:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.816 07:36:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:45.816 Found net devices under 0000:af:00.0: cvl_0_0 00:17:45.816 07:36:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.816 07:36:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:45.816 07:36:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.816 07:36:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:45.816 07:36:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.816 07:36:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:45.816 Found net devices under 0000:af:00.1: cvl_0_1 00:17:45.816 07:36:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.816 07:36:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:45.816 07:36:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:45.816 07:36:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:45.816 07:36:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.816 07:36:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.816 07:36:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.816 07:36:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:45.816 07:36:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.816 07:36:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.816 07:36:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:17:45.816 07:36:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.816 07:36:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.816 07:36:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:45.816 07:36:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:45.816 07:36:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.816 07:36:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.816 07:36:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.816 07:36:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.816 07:36:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:45.816 07:36:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.816 07:36:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.816 07:36:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.816 07:36:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:45.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:17:45.816 00:17:45.816 --- 10.0.0.2 ping statistics --- 00:17:45.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.816 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:45.816 07:36:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:17:45.816 00:17:45.816 --- 10.0.0.1 ping statistics --- 00:17:45.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.816 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:45.816 07:36:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.816 07:36:49 -- nvmf/common.sh@410 -- # return 0 00:17:45.816 07:36:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:45.816 07:36:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.816 07:36:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:45.816 07:36:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.816 07:36:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:45.816 07:36:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:45.816 07:36:49 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:45.816 07:36:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.816 07:36:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:45.816 07:36:49 -- common/autotest_common.sh@10 -- # set +x 00:17:45.816 07:36:49 -- nvmf/common.sh@469 -- # nvmfpid=4124218 00:17:45.816 07:36:49 -- nvmf/common.sh@470 -- # waitforlisten 4124218 00:17:45.816 07:36:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.816 07:36:49 -- common/autotest_common.sh@819 -- # '[' -z 4124218 ']' 00:17:45.816 07:36:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.816 07:36:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:45.816 07:36:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:45.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.816 07:36:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:45.816 07:36:49 -- common/autotest_common.sh@10 -- # set +x 00:17:45.816 [2024-10-07 07:36:49.580876] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:45.816 [2024-10-07 07:36:49.580928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.816 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.816 [2024-10-07 07:36:49.638516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.816 [2024-10-07 07:36:49.708900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.816 [2024-10-07 07:36:49.709016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.816 [2024-10-07 07:36:49.709024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.816 [2024-10-07 07:36:49.709030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.816 [2024-10-07 07:36:49.709078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.816 [2024-10-07 07:36:49.709102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.816 [2024-10-07 07:36:49.709172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.816 [2024-10-07 07:36:49.709174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.755 07:36:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:46.755 07:36:50 -- common/autotest_common.sh@852 -- # return 0 00:17:46.755 07:36:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:46.755 07:36:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:36:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.755 07:36:50 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 [2024-10-07 07:36:50.444428] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 Malloc0 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:17:46.755 07:36:50 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 [2024-10-07 07:36:50.496170] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:46.755 test case1: single bdev can't be used in multiple subsystems 00:17:46.755 07:36:50 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@28 -- # nmic_status=0 00:17:46.755 07:36:50 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:46.755 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.755 07:36:50 -- common/autotest_common.sh@10 
-- # set +x 00:17:46.755 [2024-10-07 07:36:50.520044] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:46.755 [2024-10-07 07:36:50.520067] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:46.755 [2024-10-07 07:36:50.520075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:46.755 request: 00:17:46.755 { 00:17:46.755 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:46.755 "namespace": { 00:17:46.755 "bdev_name": "Malloc0" 00:17:46.755 }, 00:17:46.755 "method": "nvmf_subsystem_add_ns", 00:17:46.755 "req_id": 1 00:17:46.755 } 00:17:46.755 Got JSON-RPC error response 00:17:46.755 response: 00:17:46.755 { 00:17:46.755 "code": -32602, 00:17:46.755 "message": "Invalid parameters" 00:17:46.755 } 00:17:46.755 07:36:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:17:46.755 07:36:50 -- target/nmic.sh@29 -- # nmic_status=1 00:17:46.755 07:36:50 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:46.755 07:36:50 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:46.755 Adding namespace failed - expected result. 
00:17:46.755 07:36:50 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:46.755 test case2: host connect to nvmf target in multiple paths 00:17:46.756 07:36:50 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:46.756 07:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:46.756 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.756 [2024-10-07 07:36:50.532175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:46.756 07:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:46.756 07:36:50 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:47.693 07:36:51 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:49.070 07:36:52 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.070 07:36:52 -- common/autotest_common.sh@1177 -- # local i=0 00:17:49.070 07:36:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.070 07:36:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:49.070 07:36:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:50.973 07:36:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:50.973 07:36:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:50.973 07:36:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.973 07:36:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:50.973 07:36:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:17:50.973 07:36:54 -- common/autotest_common.sh@1187 -- # return 0 00:17:50.973 07:36:54 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:50.973 [global] 00:17:50.973 thread=1 00:17:50.973 invalidate=1 00:17:50.973 rw=write 00:17:50.973 time_based=1 00:17:50.973 runtime=1 00:17:50.973 ioengine=libaio 00:17:50.973 direct=1 00:17:50.973 bs=4096 00:17:50.973 iodepth=1 00:17:50.973 norandommap=0 00:17:50.973 numjobs=1 00:17:50.973 00:17:50.973 verify_dump=1 00:17:50.973 verify_backlog=512 00:17:50.973 verify_state_save=0 00:17:50.973 do_verify=1 00:17:50.973 verify=crc32c-intel 00:17:50.973 [job0] 00:17:50.973 filename=/dev/nvme0n1 00:17:50.973 Could not set queue depth (nvme0n1) 00:17:51.233 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.233 fio-3.35 00:17:51.233 Starting 1 thread 00:17:52.612 00:17:52.612 job0: (groupid=0, jobs=1): err= 0: pid=4125285: Mon Oct 7 07:36:56 2024 00:17:52.612 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:17:52.612 slat (nsec): min=6991, max=41295, avg=8056.15, stdev=1456.13 00:17:52.612 clat (usec): min=309, max=389, avg=351.67, stdev=10.24 00:17:52.612 lat (usec): min=336, max=397, avg=359.73, stdev=10.20 00:17:52.612 clat percentiles (usec): 00:17:52.612 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 343], 00:17:52.612 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 351], 00:17:52.612 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 367], 95.00th=[ 375], 00:17:52.612 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:17:52.612 | 99.99th=[ 388] 00:17:52.612 write: IOPS=1979, BW=7916KiB/s (8106kB/s)(7924KiB/1001msec); 0 zone resets 00:17:52.612 slat (usec): min=10, max=25494, avg=24.79, stdev=572.53 00:17:52.612 clat (usec): min=170, max=384, avg=195.63, stdev=15.17 00:17:52.612 lat (usec): min=181, max=25846, avg=220.42, 
stdev=576.25 00:17:52.612 clat percentiles (usec): 00:17:52.612 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 180], 00:17:52.612 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:17:52.612 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 215], 00:17:52.612 | 99.00th=[ 225], 99.50th=[ 269], 99.90th=[ 355], 99.95th=[ 383], 00:17:52.612 | 99.99th=[ 383] 00:17:52.612 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:17:52.612 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:52.612 lat (usec) : 250=55.90%, 500=44.10% 00:17:52.612 cpu : usr=3.40%, sys=5.30%, ctx=3520, majf=0, minf=1 00:17:52.612 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.612 issued rwts: total=1536,1981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.612 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.612 00:17:52.612 Run status group 0 (all jobs): 00:17:52.612 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:17:52.612 WRITE: bw=7916KiB/s (8106kB/s), 7916KiB/s-7916KiB/s (8106kB/s-8106kB/s), io=7924KiB (8114kB), run=1001-1001msec 00:17:52.612 00:17:52.612 Disk stats (read/write): 00:17:52.612 nvme0n1: ios=1543/1536, merge=0/0, ticks=1513/285, in_queue=1798, util=98.60% 00:17:52.612 07:36:56 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:52.612 07:36:56 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.612 07:36:56 -- common/autotest_common.sh@1198 -- # local i=0 00:17:52.612 07:36:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:52.612 07:36:56 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.612 07:36:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:52.612 07:36:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.612 07:36:56 -- common/autotest_common.sh@1210 -- # return 0 00:17:52.612 07:36:56 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:52.612 07:36:56 -- target/nmic.sh@53 -- # nvmftestfini 00:17:52.612 07:36:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.612 07:36:56 -- nvmf/common.sh@116 -- # sync 00:17:52.612 07:36:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:52.612 07:36:56 -- nvmf/common.sh@119 -- # set +e 00:17:52.612 07:36:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.612 07:36:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:52.612 rmmod nvme_tcp 00:17:52.612 rmmod nvme_fabrics 00:17:52.872 rmmod nvme_keyring 00:17:52.872 07:36:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.872 07:36:56 -- nvmf/common.sh@123 -- # set -e 00:17:52.872 07:36:56 -- nvmf/common.sh@124 -- # return 0 00:17:52.872 07:36:56 -- nvmf/common.sh@477 -- # '[' -n 4124218 ']' 00:17:52.872 07:36:56 -- nvmf/common.sh@478 -- # killprocess 4124218 00:17:52.872 07:36:56 -- common/autotest_common.sh@926 -- # '[' -z 4124218 ']' 00:17:52.872 07:36:56 -- common/autotest_common.sh@930 -- # kill -0 4124218 00:17:52.872 07:36:56 -- common/autotest_common.sh@931 -- # uname 00:17:52.872 07:36:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:52.872 07:36:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4124218 00:17:52.872 07:36:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:52.872 07:36:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:52.872 07:36:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4124218' 00:17:52.872 killing process with pid 4124218 00:17:52.872 07:36:56 -- 
common/autotest_common.sh@945 -- # kill 4124218 00:17:52.872 07:36:56 -- common/autotest_common.sh@950 -- # wait 4124218 00:17:53.132 07:36:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:53.132 07:36:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:53.132 07:36:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:53.132 07:36:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.132 07:36:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:53.132 07:36:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.132 07:36:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.132 07:36:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.037 07:36:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:55.037 00:17:55.037 real 0m14.724s 00:17:55.037 user 0m35.665s 00:17:55.037 sys 0m4.680s 00:17:55.037 07:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.037 07:36:58 -- common/autotest_common.sh@10 -- # set +x 00:17:55.037 ************************************ 00:17:55.037 END TEST nvmf_nmic 00:17:55.037 ************************************ 00:17:55.295 07:36:59 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:55.295 07:36:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:55.295 07:36:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:55.295 07:36:59 -- common/autotest_common.sh@10 -- # set +x 00:17:55.295 ************************************ 00:17:55.295 START TEST nvmf_fio_target 00:17:55.295 ************************************ 00:17:55.295 07:36:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:55.295 * Looking for test storage... 
00:17:55.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.295 07:36:59 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.295 07:36:59 -- nvmf/common.sh@7 -- # uname -s 00:17:55.295 07:36:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.295 07:36:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.295 07:36:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.295 07:36:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.295 07:36:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.295 07:36:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.295 07:36:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.295 07:36:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.295 07:36:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.295 07:36:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.295 07:36:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:55.295 07:36:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:55.295 07:36:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.295 07:36:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.295 07:36:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.295 07:36:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.295 07:36:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.295 07:36:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.295 07:36:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.295 07:36:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.295 07:36:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.296 07:36:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.296 07:36:59 -- paths/export.sh@5 -- # export PATH 00:17:55.296 07:36:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.296 07:36:59 -- nvmf/common.sh@46 -- # : 0 00:17:55.296 07:36:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:55.296 07:36:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:55.296 07:36:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:55.296 07:36:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.296 07:36:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.296 07:36:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:55.296 07:36:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:55.296 07:36:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:55.296 07:36:59 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.296 07:36:59 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.296 07:36:59 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.296 07:36:59 -- target/fio.sh@16 -- # nvmftestinit 00:17:55.296 07:36:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:55.296 07:36:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.296 07:36:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:55.296 07:36:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:55.296 07:36:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:55.296 07:36:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.296 07:36:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:55.296 07:36:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.296 07:36:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:55.296 07:36:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:55.296 07:36:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:55.296 07:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:00.565 07:37:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:00.565 07:37:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:00.565 07:37:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:00.565 07:37:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:00.565 07:37:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:00.565 07:37:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:00.565 07:37:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:00.565 07:37:03 -- nvmf/common.sh@294 -- # net_devs=() 00:18:00.565 07:37:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:00.565 07:37:03 -- nvmf/common.sh@295 -- # e810=() 00:18:00.565 07:37:03 -- nvmf/common.sh@295 -- # local -ga e810 00:18:00.565 07:37:03 -- nvmf/common.sh@296 -- # x722=() 00:18:00.565 07:37:03 -- nvmf/common.sh@296 -- # local -ga x722 00:18:00.565 07:37:03 -- nvmf/common.sh@297 -- # mlx=() 00:18:00.565 07:37:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:00.565 07:37:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.565 07:37:03 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.565 07:37:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:00.565 07:37:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:00.565 07:37:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:00.565 07:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:00.565 07:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:00.565 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:00.565 07:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:00.565 07:37:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:00.565 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:00.565 07:37:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:00.565 
07:37:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:00.565 07:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.565 07:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:00.565 07:37:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.565 07:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:00.565 Found net devices under 0000:af:00.0: cvl_0_0 00:18:00.565 07:37:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.565 07:37:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:00.565 07:37:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.565 07:37:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:00.565 07:37:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.565 07:37:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:00.565 Found net devices under 0000:af:00.1: cvl_0_1 00:18:00.565 07:37:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.565 07:37:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:00.565 07:37:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:00.565 07:37:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:00.565 07:37:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:00.565 07:37:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.565 07:37:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.565 07:37:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.565 07:37:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:00.565 07:37:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.565 07:37:03 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.565 07:37:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:00.565 07:37:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.565 07:37:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.565 07:37:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:00.565 07:37:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:00.565 07:37:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.565 07:37:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.565 07:37:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.565 07:37:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.565 07:37:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:00.565 07:37:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.565 07:37:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.565 07:37:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.565 07:37:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:00.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:18:00.565 00:18:00.565 --- 10.0.0.2 ping statistics --- 00:18:00.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.565 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:18:00.565 07:37:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:18:00.565 00:18:00.565 --- 10.0.0.1 ping statistics --- 00:18:00.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.565 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:18:00.565 07:37:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.565 07:37:04 -- nvmf/common.sh@410 -- # return 0 00:18:00.565 07:37:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:00.565 07:37:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.565 07:37:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:00.565 07:37:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:00.565 07:37:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.565 07:37:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:00.565 07:37:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:00.565 07:37:04 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:00.565 07:37:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:00.565 07:37:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:00.565 07:37:04 -- common/autotest_common.sh@10 -- # set +x 00:18:00.565 07:37:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.565 07:37:04 -- nvmf/common.sh@469 -- # nvmfpid=4128772 00:18:00.565 07:37:04 -- nvmf/common.sh@470 -- # waitforlisten 4128772 00:18:00.565 07:37:04 -- common/autotest_common.sh@819 -- # '[' -z 4128772 ']' 00:18:00.565 07:37:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.565 07:37:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.565 07:37:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:00.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.565 07:37:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.565 07:37:04 -- common/autotest_common.sh@10 -- # set +x 00:18:00.565 [2024-10-07 07:37:04.132992] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:00.565 [2024-10-07 07:37:04.133032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.565 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.565 [2024-10-07 07:37:04.190875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.565 [2024-10-07 07:37:04.265874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.565 [2024-10-07 07:37:04.265984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.565 [2024-10-07 07:37:04.265992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.565 [2024-10-07 07:37:04.265998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.565 [2024-10-07 07:37:04.266049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.565 [2024-10-07 07:37:04.266148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.565 [2024-10-07 07:37:04.266171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.565 [2024-10-07 07:37:04.266173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.132 07:37:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.132 07:37:04 -- common/autotest_common.sh@852 -- # return 0 00:18:01.132 07:37:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:01.132 07:37:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:01.132 07:37:04 -- common/autotest_common.sh@10 -- # set +x 00:18:01.132 07:37:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.132 07:37:05 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.391 [2024-10-07 07:37:05.166025] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.391 07:37:05 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.650 07:37:05 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:01.650 07:37:05 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.650 07:37:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:01.650 07:37:05 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:01.908 07:37:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:01.908 07:37:05 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.167 07:37:05 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:18:02.167 07:37:05 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:02.425 07:37:06 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.425 07:37:06 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:02.684 07:37:06 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.684 07:37:06 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:02.684 07:37:06 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:02.943 07:37:06 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:02.943 07:37:06 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:03.201 07:37:06 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:03.460 07:37:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:03.460 07:37:07 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.460 07:37:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:03.460 07:37:07 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.718 07:37:07 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.976 [2024-10-07 07:37:07.735487] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.976 07:37:07 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:04.234 07:37:07 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:04.234 07:37:08 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.714 07:37:09 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:05.714 07:37:09 -- common/autotest_common.sh@1177 -- # local i=0 00:18:05.714 07:37:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.714 07:37:09 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:18:05.714 07:37:09 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:18:05.714 07:37:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:07.635 07:37:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:07.635 07:37:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:07.635 07:37:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.635 07:37:11 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:18:07.635 07:37:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.635 07:37:11 -- common/autotest_common.sh@1187 -- # return 0 00:18:07.635 07:37:11 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:07.635 [global] 00:18:07.635 thread=1 00:18:07.635 invalidate=1 00:18:07.635 rw=write 00:18:07.635 time_based=1 00:18:07.635 runtime=1 00:18:07.635 ioengine=libaio 00:18:07.635 direct=1 00:18:07.635 bs=4096 00:18:07.635 
iodepth=1 00:18:07.635 norandommap=0 00:18:07.635 numjobs=1 00:18:07.635 00:18:07.635 verify_dump=1 00:18:07.635 verify_backlog=512 00:18:07.635 verify_state_save=0 00:18:07.635 do_verify=1 00:18:07.635 verify=crc32c-intel 00:18:07.635 [job0] 00:18:07.635 filename=/dev/nvme0n1 00:18:07.635 [job1] 00:18:07.635 filename=/dev/nvme0n2 00:18:07.635 [job2] 00:18:07.635 filename=/dev/nvme0n3 00:18:07.635 [job3] 00:18:07.635 filename=/dev/nvme0n4 00:18:07.635 Could not set queue depth (nvme0n1) 00:18:07.635 Could not set queue depth (nvme0n2) 00:18:07.635 Could not set queue depth (nvme0n3) 00:18:07.635 Could not set queue depth (nvme0n4) 00:18:07.894 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.894 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.894 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.894 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.894 fio-3.35 00:18:07.894 Starting 4 threads 00:18:09.281 00:18:09.281 job0: (groupid=0, jobs=1): err= 0: pid=4130305: Mon Oct 7 07:37:12 2024 00:18:09.281 read: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec) 00:18:09.281 slat (nsec): min=7062, max=31399, avg=8069.35, stdev=1453.38 00:18:09.281 clat (usec): min=261, max=493, avg=289.45, stdev=15.44 00:18:09.281 lat (usec): min=270, max=501, avg=297.52, stdev=15.56 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:18:09.281 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:18:09.281 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 314], 00:18:09.281 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 445], 99.95th=[ 494], 00:18:09.281 | 99.99th=[ 494] 00:18:09.281 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 
zone resets 00:18:09.281 slat (nsec): min=10575, max=38355, avg=11850.00, stdev=1551.62 00:18:09.281 clat (usec): min=171, max=1241, avg=195.70, stdev=31.09 00:18:09.281 lat (usec): min=182, max=1251, avg=207.55, stdev=31.34 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:18:09.281 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:18:09.281 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 243], 00:18:09.281 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 322], 99.95th=[ 388], 00:18:09.281 | 99.99th=[ 1237] 00:18:09.281 bw ( KiB/s): min= 8192, max= 8192, per=58.86%, avg=8192.00, stdev= 0.00, samples=1 00:18:09.281 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:09.281 lat (usec) : 250=50.60%, 500=49.38% 00:18:09.281 lat (msec) : 2=0.03% 00:18:09.281 cpu : usr=3.40%, sys=6.20%, ctx=3944, majf=0, minf=1 00:18:09.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 issued rwts: total=1891,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.281 job1: (groupid=0, jobs=1): err= 0: pid=4130321: Mon Oct 7 07:37:12 2024 00:18:09.281 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:18:09.281 slat (nsec): min=10014, max=27046, avg=19949.36, stdev=4544.12 00:18:09.281 clat (usec): min=40881, max=41256, avg=40984.80, stdev=95.94 00:18:09.281 lat (usec): min=40902, max=41266, avg=41004.75, stdev=94.03 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:09.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:09.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:18:09.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:09.281 | 99.99th=[41157] 00:18:09.281 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:18:09.281 slat (nsec): min=10082, max=63671, avg=12624.37, stdev=3460.47 00:18:09.281 clat (usec): min=188, max=369, avg=225.15, stdev=21.81 00:18:09.281 lat (usec): min=202, max=407, avg=237.78, stdev=23.10 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:18:09.281 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:18:09.281 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 265], 00:18:09.281 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 371], 99.95th=[ 371], 00:18:09.281 | 99.99th=[ 371] 00:18:09.281 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.281 lat (usec) : 250=84.46%, 500=11.42% 00:18:09.281 lat (msec) : 50=4.12% 00:18:09.281 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=2 00:18:09.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.281 job2: (groupid=0, jobs=1): err= 0: pid=4130339: Mon Oct 7 07:37:12 2024 00:18:09.281 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:18:09.281 slat (nsec): min=9919, max=23764, avg=22606.95, stdev=2860.07 00:18:09.281 clat (usec): min=40791, max=41036, avg=40958.42, stdev=54.94 00:18:09.281 lat (usec): min=40814, max=41058, avg=40981.02, stdev=55.43 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:18:09.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:09.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:09.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:09.281 | 99.99th=[41157] 00:18:09.281 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:18:09.281 slat (nsec): min=9656, max=39311, avg=11636.30, stdev=2465.62 00:18:09.281 clat (usec): min=165, max=315, avg=219.63, stdev=25.50 00:18:09.281 lat (usec): min=177, max=329, avg=231.27, stdev=26.33 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:18:09.281 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:18:09.281 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:18:09.281 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 318], 00:18:09.281 | 99.99th=[ 318] 00:18:09.281 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.281 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.281 lat (usec) : 250=85.21%, 500=10.67% 00:18:09.281 lat (msec) : 50=4.12% 00:18:09.281 cpu : usr=0.10%, sys=0.69%, ctx=534, majf=0, minf=2 00:18:09.281 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.281 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.281 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.281 job3: (groupid=0, jobs=1): err= 0: pid=4130342: Mon Oct 7 07:37:12 2024 00:18:09.281 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:18:09.281 slat (nsec): min=10538, max=24573, avg=22774.73, stdev=2782.15 00:18:09.281 clat (usec): min=40880, max=41416, avg=40988.17, 
stdev=104.70 00:18:09.281 lat (usec): min=40904, max=41427, avg=41010.95, stdev=102.16 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:09.281 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:09.281 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:09.281 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:09.281 | 99.99th=[41157] 00:18:09.281 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:09.281 slat (nsec): min=11075, max=38231, avg=13146.40, stdev=2439.42 00:18:09.281 clat (usec): min=185, max=465, avg=231.33, stdev=26.75 00:18:09.281 lat (usec): min=196, max=476, avg=244.47, stdev=27.13 00:18:09.281 clat percentiles (usec): 00:18:09.281 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:18:09.281 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 233], 00:18:09.281 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:18:09.281 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 465], 99.95th=[ 465], 00:18:09.282 | 99.99th=[ 465] 00:18:09.282 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:18:09.282 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:09.282 lat (usec) : 250=76.22%, 500=19.66% 00:18:09.282 lat (msec) : 50=4.12% 00:18:09.282 cpu : usr=0.19%, sys=1.17%, ctx=536, majf=0, minf=1 00:18:09.282 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:09.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:09.282 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:09.282 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:09.282 00:18:09.282 Run status group 0 (all jobs): 00:18:09.282 READ: bw=7600KiB/s (7782kB/s), 
85.4KiB/s-7556KiB/s (87.5kB/s-7738kB/s), io=7828KiB (8016kB), run=1001-1030msec 00:18:09.282 WRITE: bw=13.6MiB/s (14.3MB/s), 1988KiB/s-8184KiB/s (2036kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1030msec 00:18:09.282 00:18:09.282 Disk stats (read/write): 00:18:09.282 nvme0n1: ios=1562/1806, merge=0/0, ticks=1414/337, in_queue=1751, util=97.49% 00:18:09.282 nvme0n2: ios=39/512, merge=0/0, ticks=723/105, in_queue=828, util=87.16% 00:18:09.282 nvme0n3: ios=17/512, merge=0/0, ticks=697/109, in_queue=806, util=88.78% 00:18:09.282 nvme0n4: ios=40/512, merge=0/0, ticks=1641/110, in_queue=1751, util=97.78% 00:18:09.282 07:37:12 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:09.282 [global] 00:18:09.282 thread=1 00:18:09.282 invalidate=1 00:18:09.282 rw=randwrite 00:18:09.282 time_based=1 00:18:09.282 runtime=1 00:18:09.282 ioengine=libaio 00:18:09.282 direct=1 00:18:09.282 bs=4096 00:18:09.282 iodepth=1 00:18:09.282 norandommap=0 00:18:09.282 numjobs=1 00:18:09.282 00:18:09.282 verify_dump=1 00:18:09.282 verify_backlog=512 00:18:09.282 verify_state_save=0 00:18:09.282 do_verify=1 00:18:09.282 verify=crc32c-intel 00:18:09.282 [job0] 00:18:09.282 filename=/dev/nvme0n1 00:18:09.282 [job1] 00:18:09.282 filename=/dev/nvme0n2 00:18:09.282 [job2] 00:18:09.282 filename=/dev/nvme0n3 00:18:09.282 [job3] 00:18:09.282 filename=/dev/nvme0n4 00:18:09.282 Could not set queue depth (nvme0n1) 00:18:09.282 Could not set queue depth (nvme0n2) 00:18:09.282 Could not set queue depth (nvme0n3) 00:18:09.282 Could not set queue depth (nvme0n4) 00:18:09.541 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:09.541 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:09.541 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:18:09.541 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:09.541 fio-3.35 00:18:09.541 Starting 4 threads 00:18:10.911 00:18:10.911 job0: (groupid=0, jobs=1): err= 0: pid=4130705: Mon Oct 7 07:37:14 2024 00:18:10.911 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:18:10.911 slat (nsec): min=9588, max=33652, avg=20683.14, stdev=5950.06 00:18:10.911 clat (usec): min=40803, max=42108, avg=41309.20, stdev=471.78 00:18:10.911 lat (usec): min=40825, max=42124, avg=41329.88, stdev=472.41 00:18:10.911 clat percentiles (usec): 00:18:10.911 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:10.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:10.911 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:10.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:10.911 | 99.99th=[42206] 00:18:10.911 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:10.911 slat (nsec): min=9698, max=42273, avg=11727.51, stdev=2572.78 00:18:10.911 clat (usec): min=186, max=361, avg=232.38, stdev=21.23 00:18:10.911 lat (usec): min=198, max=404, avg=244.11, stdev=21.61 00:18:10.911 clat percentiles (usec): 00:18:10.911 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:18:10.911 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:18:10.911 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:18:10.911 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 363], 99.95th=[ 363], 00:18:10.911 | 99.99th=[ 363] 00:18:10.911 bw ( KiB/s): min= 4096, max= 4096, per=23.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:10.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:10.911 lat (usec) : 250=78.09%, 500=17.79% 00:18:10.911 lat (msec) : 50=4.12% 00:18:10.911 cpu : usr=0.58%, sys=0.48%, ctx=536, majf=0, minf=1 00:18:10.911 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.911 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.911 job1: (groupid=0, jobs=1): err= 0: pid=4130706: Mon Oct 7 07:37:14 2024 00:18:10.911 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:10.911 slat (nsec): min=6995, max=36309, avg=8164.94, stdev=1712.32 00:18:10.911 clat (usec): min=258, max=2554, avg=320.71, stdev=64.57 00:18:10.911 lat (usec): min=266, max=2563, avg=328.87, stdev=64.63 00:18:10.911 clat percentiles (usec): 00:18:10.911 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:18:10.911 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:18:10.911 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 371], 00:18:10.911 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 955], 99.95th=[ 2540], 00:18:10.911 | 99.99th=[ 2540] 00:18:10.911 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8184KiB/1001msec); 0 zone resets 00:18:10.911 slat (usec): min=9, max=35787, avg=29.08, stdev=790.93 00:18:10.911 clat (usec): min=161, max=409, avg=207.46, stdev=21.86 00:18:10.911 lat (usec): min=172, max=36131, avg=236.54, stdev=794.28 00:18:10.911 clat percentiles (usec): 00:18:10.911 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:18:10.911 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:18:10.911 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:18:10.911 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 343], 99.95th=[ 347], 00:18:10.911 | 99.99th=[ 412] 00:18:10.911 bw ( KiB/s): min= 8192, max= 8192, per=46.11%, avg=8192.00, stdev= 0.00, samples=1 00:18:10.911 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:10.911 lat 
(usec) : 250=54.89%, 500=45.03%, 750=0.03%, 1000=0.03% 00:18:10.911 lat (msec) : 4=0.03% 00:18:10.911 cpu : usr=3.60%, sys=5.00%, ctx=3585, majf=0, minf=1 00:18:10.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.911 issued rwts: total=1536,2046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.911 job2: (groupid=0, jobs=1): err= 0: pid=4130707: Mon Oct 7 07:37:14 2024 00:18:10.911 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:18:10.912 slat (nsec): min=7879, max=23505, avg=13454.05, stdev=5435.32 00:18:10.912 clat (usec): min=40859, max=41993, avg=41083.03, stdev=287.06 00:18:10.912 lat (usec): min=40880, max=42002, avg=41096.49, stdev=287.74 00:18:10.912 clat percentiles (usec): 00:18:10.912 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:10.912 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:10.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:10.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:10.912 | 99.99th=[42206] 00:18:10.912 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:10.912 slat (nsec): min=9995, max=38734, avg=11295.06, stdev=1789.42 00:18:10.912 clat (usec): min=192, max=422, avg=245.27, stdev=11.97 00:18:10.912 lat (usec): min=204, max=461, avg=256.56, stdev=12.85 00:18:10.912 clat percentiles (usec): 00:18:10.912 | 1.00th=[ 204], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 241], 00:18:10.912 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 245], 00:18:10.912 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 255], 00:18:10.912 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 424], 99.95th=[ 424], 
00:18:10.912 | 99.99th=[ 424] 00:18:10.912 bw ( KiB/s): min= 4096, max= 4096, per=23.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:10.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:10.912 lat (usec) : 250=76.59%, 500=19.29% 00:18:10.912 lat (msec) : 50=4.12% 00:18:10.912 cpu : usr=0.00%, sys=0.77%, ctx=535, majf=0, minf=1 00:18:10.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.912 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.912 job3: (groupid=0, jobs=1): err= 0: pid=4130708: Mon Oct 7 07:37:14 2024 00:18:10.912 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4120KiB/1035msec) 00:18:10.912 slat (nsec): min=6909, max=25222, avg=8017.45, stdev=1443.18 00:18:10.912 clat (usec): min=342, max=41298, avg=622.05, stdev=3094.55 00:18:10.912 lat (usec): min=350, max=41308, avg=630.07, stdev=3095.58 00:18:10.912 clat percentiles (usec): 00:18:10.912 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 371], 00:18:10.912 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 383], 00:18:10.912 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 453], 00:18:10.912 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:10.912 | 99.99th=[41157] 00:18:10.912 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:18:10.912 slat (nsec): min=9634, max=44184, avg=11151.02, stdev=2256.92 00:18:10.912 clat (usec): min=173, max=632, avg=235.09, stdev=41.36 00:18:10.912 lat (usec): min=183, max=643, avg=246.24, stdev=41.51 00:18:10.912 clat percentiles (usec): 00:18:10.912 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:18:10.912 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 227], 
60.00th=[ 237], 00:18:10.912 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 310], 95.00th=[ 318], 00:18:10.912 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 603], 99.95th=[ 635], 00:18:10.912 | 99.99th=[ 635] 00:18:10.912 bw ( KiB/s): min= 4096, max= 8192, per=34.58%, avg=6144.00, stdev=2896.31, samples=2 00:18:10.912 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:10.912 lat (usec) : 250=43.96%, 500=55.65%, 750=0.16% 00:18:10.912 lat (msec) : 50=0.23% 00:18:10.912 cpu : usr=2.03%, sys=4.06%, ctx=2566, majf=0, minf=2 00:18:10.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.912 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:10.912 00:18:10.912 Run status group 0 (all jobs): 00:18:10.912 READ: bw=9.83MiB/s (10.3MB/s), 84.9KiB/s-6138KiB/s (86.9kB/s-6285kB/s), io=10.2MiB (10.7MB), run=1001-1037msec 00:18:10.912 WRITE: bw=17.3MiB/s (18.2MB/s), 1975KiB/s-8176KiB/s (2022kB/s-8372kB/s), io=18.0MiB (18.9MB), run=1001-1037msec 00:18:10.912 00:18:10.912 Disk stats (read/write): 00:18:10.912 nvme0n1: ios=44/512, merge=0/0, ticks=1690/114, in_queue=1804, util=97.49% 00:18:10.912 nvme0n2: ios=1411/1536, merge=0/0, ticks=1319/316, in_queue=1635, util=99.28% 00:18:10.912 nvme0n3: ios=59/512, merge=0/0, ticks=1807/127, in_queue=1934, util=98.63% 00:18:10.912 nvme0n4: ios=1025/1536, merge=0/0, ticks=427/338, in_queue=765, util=89.63% 00:18:10.912 07:37:14 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:10.912 [global] 00:18:10.912 thread=1 00:18:10.912 invalidate=1 00:18:10.912 rw=write 00:18:10.912 time_based=1 00:18:10.912 runtime=1 00:18:10.912 ioengine=libaio 
00:18:10.912 direct=1 00:18:10.912 bs=4096 00:18:10.912 iodepth=128 00:18:10.912 norandommap=0 00:18:10.912 numjobs=1 00:18:10.912 00:18:10.912 verify_dump=1 00:18:10.912 verify_backlog=512 00:18:10.912 verify_state_save=0 00:18:10.912 do_verify=1 00:18:10.912 verify=crc32c-intel 00:18:10.912 [job0] 00:18:10.912 filename=/dev/nvme0n1 00:18:10.912 [job1] 00:18:10.912 filename=/dev/nvme0n2 00:18:10.912 [job2] 00:18:10.912 filename=/dev/nvme0n3 00:18:10.912 [job3] 00:18:10.912 filename=/dev/nvme0n4 00:18:10.912 Could not set queue depth (nvme0n1) 00:18:10.912 Could not set queue depth (nvme0n2) 00:18:10.912 Could not set queue depth (nvme0n3) 00:18:10.912 Could not set queue depth (nvme0n4) 00:18:11.169 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.169 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.169 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.169 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.169 fio-3.35 00:18:11.169 Starting 4 threads 00:18:12.541 00:18:12.541 job0: (groupid=0, jobs=1): err= 0: pid=4131080: Mon Oct 7 07:37:16 2024 00:18:12.541 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:18:12.541 slat (nsec): min=1785, max=14588k, avg=110043.05, stdev=814045.04 00:18:12.541 clat (usec): min=1116, max=108768, avg=14019.99, stdev=11058.95 00:18:12.541 lat (usec): min=1122, max=108777, avg=14130.04, stdev=11170.24 00:18:12.541 clat percentiles (msec): 00:18:12.541 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:18:12.541 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:18:12.541 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 20], 95.00th=[ 27], 00:18:12.541 | 99.00th=[ 84], 99.50th=[ 99], 99.90th=[ 109], 99.95th=[ 109], 00:18:12.541 | 99.99th=[ 109] 
00:18:12.541 write: IOPS=3691, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1003msec); 0 zone resets 00:18:12.541 slat (usec): min=2, max=13603, avg=145.41, stdev=927.27 00:18:12.541 clat (usec): min=818, max=111019, avg=20814.29, stdev=29273.36 00:18:12.541 lat (usec): min=827, max=111028, avg=20959.70, stdev=29468.79 00:18:12.541 clat percentiles (msec): 00:18:12.541 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:18:12.541 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:18:12.541 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 94], 95.00th=[ 102], 00:18:12.541 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:18:12.541 | 99.99th=[ 111] 00:18:12.541 bw ( KiB/s): min= 7792, max=20936, per=20.80%, avg=14364.00, stdev=9294.21, samples=2 00:18:12.541 iops : min= 1948, max= 5234, avg=3591.00, stdev=2323.55, samples=2 00:18:12.541 lat (usec) : 1000=0.04% 00:18:12.541 lat (msec) : 2=0.38%, 4=1.04%, 10=41.31%, 20=45.01%, 50=5.20% 00:18:12.541 lat (msec) : 100=3.58%, 250=3.43% 00:18:12.541 cpu : usr=2.10%, sys=4.29%, ctx=280, majf=0, minf=1 00:18:12.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.541 issued rwts: total=3584,3703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.541 job1: (groupid=0, jobs=1): err= 0: pid=4131081: Mon Oct 7 07:37:16 2024 00:18:12.541 read: IOPS=4419, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1045msec) 00:18:12.541 slat (nsec): min=1213, max=18251k, avg=109360.39, stdev=827601.89 00:18:12.541 clat (usec): min=5138, max=49718, avg=13907.66, stdev=6438.25 00:18:12.541 lat (usec): min=5144, max=49729, avg=14017.02, stdev=6493.56 00:18:12.541 clat percentiles (usec): 00:18:12.541 | 1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9110], 
00:18:12.541 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11338], 60.00th=[12780], 00:18:12.541 | 70.00th=[15139], 80.00th=[19792], 90.00th=[23725], 95.00th=[26870], 00:18:12.541 | 99.00th=[33817], 99.50th=[40109], 99.90th=[49546], 99.95th=[49546], 00:18:12.541 | 99.99th=[49546] 00:18:12.541 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:18:12.541 slat (nsec): min=1902, max=12549k, avg=86675.16, stdev=593326.58 00:18:12.541 clat (usec): min=803, max=60297, avg=13364.61, stdev=9844.36 00:18:12.541 lat (usec): min=812, max=61740, avg=13451.28, stdev=9874.18 00:18:12.541 clat percentiles (usec): 00:18:12.541 | 1.00th=[ 3490], 5.00th=[ 5145], 10.00th=[ 6456], 20.00th=[ 7111], 00:18:12.541 | 30.00th=[ 8094], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10683], 00:18:12.541 | 70.00th=[13566], 80.00th=[16909], 90.00th=[24249], 95.00th=[35390], 00:18:12.541 | 99.00th=[59507], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:18:12.541 | 99.99th=[60556] 00:18:12.541 bw ( KiB/s): min=17088, max=22928, per=28.98%, avg=20008.00, stdev=4129.50, samples=2 00:18:12.542 iops : min= 4272, max= 5732, avg=5002.00, stdev=1032.38, samples=2 00:18:12.542 lat (usec) : 1000=0.03% 00:18:12.542 lat (msec) : 2=0.09%, 4=1.01%, 10=36.37%, 20=46.47%, 50=14.95% 00:18:12.542 lat (msec) : 100=1.08% 00:18:12.542 cpu : usr=4.21%, sys=5.56%, ctx=392, majf=0, minf=1 00:18:12.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:12.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.542 issued rwts: total=4618,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.542 job2: (groupid=0, jobs=1): err= 0: pid=4131082: Mon Oct 7 07:37:16 2024 00:18:12.542 read: IOPS=4962, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:18:12.542 slat (nsec): min=1599, max=42075k, 
avg=100940.63, stdev=790301.43 00:18:12.542 clat (usec): min=2592, max=55175, avg=12326.20, stdev=4546.79 00:18:12.542 lat (usec): min=4853, max=55182, avg=12427.14, stdev=4598.18 00:18:12.542 clat percentiles (usec): 00:18:12.542 | 1.00th=[ 6783], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[10159], 00:18:12.542 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:18:12.542 | 70.00th=[12387], 80.00th=[13566], 90.00th=[15664], 95.00th=[17433], 00:18:12.542 | 99.00th=[20841], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:18:12.542 | 99.99th=[55313] 00:18:12.542 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:18:12.542 slat (usec): min=2, max=6054, avg=91.78, stdev=510.93 00:18:12.542 clat (usec): min=4945, max=55540, avg=12822.16, stdev=5823.30 00:18:12.542 lat (usec): min=4955, max=55877, avg=12913.95, stdev=5834.75 00:18:12.542 clat percentiles (usec): 00:18:12.542 | 1.00th=[ 7635], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10945], 00:18:12.542 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:18:12.542 | 70.00th=[12387], 80.00th=[12518], 90.00th=[14353], 95.00th=[16909], 00:18:12.542 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:18:12.542 | 99.99th=[55313] 00:18:12.542 bw ( KiB/s): min=19208, max=21752, per=29.66%, avg=20480.00, stdev=1798.88, samples=2 00:18:12.542 iops : min= 4802, max= 5438, avg=5120.00, stdev=449.72, samples=2 00:18:12.542 lat (msec) : 4=0.01%, 10=14.09%, 20=83.21%, 50=1.44%, 100=1.26% 00:18:12.542 cpu : usr=3.99%, sys=5.48%, ctx=506, majf=0, minf=1 00:18:12.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:12.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.542 issued rwts: total=4982,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.542 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:18:12.542 job3: (groupid=0, jobs=1): err= 0: pid=4131083: Mon Oct 7 07:37:16 2024 00:18:12.542 read: IOPS=3945, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1005msec) 00:18:12.542 slat (nsec): min=1610, max=32540k, avg=126007.21, stdev=1046649.21 00:18:12.542 clat (usec): min=1117, max=59847, avg=16376.68, stdev=8528.89 00:18:12.542 lat (usec): min=4409, max=59857, avg=16502.69, stdev=8587.43 00:18:12.542 clat percentiles (usec): 00:18:12.542 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10814], 00:18:12.542 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12649], 60.00th=[15533], 00:18:12.542 | 70.00th=[17171], 80.00th=[20841], 90.00th=[27395], 95.00th=[34866], 00:18:12.542 | 99.00th=[53740], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:18:12.542 | 99.99th=[60031] 00:18:12.542 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:18:12.542 slat (usec): min=2, max=23651, avg=115.20, stdev=860.53 00:18:12.542 clat (usec): min=3011, max=64538, avg=14498.94, stdev=9077.79 00:18:12.542 lat (usec): min=3022, max=64548, avg=14614.14, stdev=9129.38 00:18:12.542 clat percentiles (usec): 00:18:12.542 | 1.00th=[ 5342], 5.00th=[ 6849], 10.00th=[ 7767], 20.00th=[ 9110], 00:18:12.542 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[11731], 60.00th=[13173], 00:18:12.542 | 70.00th=[16057], 80.00th=[17433], 90.00th=[22152], 95.00th=[30016], 00:18:12.542 | 99.00th=[61604], 99.50th=[63701], 99.90th=[64750], 99.95th=[64750], 00:18:12.542 | 99.99th=[64750] 00:18:12.542 bw ( KiB/s): min=12288, max=20480, per=23.73%, avg=16384.00, stdev=5792.62, samples=2 00:18:12.542 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:18:12.542 lat (msec) : 2=0.01%, 4=0.19%, 10=19.49%, 20=63.44%, 50=15.11% 00:18:12.542 lat (msec) : 100=1.76% 00:18:12.542 cpu : usr=3.59%, sys=6.18%, ctx=257, majf=0, minf=1 00:18:12.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:12.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.542 issued rwts: total=3965,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.542 00:18:12.542 Run status group 0 (all jobs): 00:18:12.542 READ: bw=64.1MiB/s (67.2MB/s), 14.0MiB/s-19.4MiB/s (14.6MB/s-20.3MB/s), io=67.0MiB (70.2MB), run=1003-1045msec 00:18:12.542 WRITE: bw=67.4MiB/s (70.7MB/s), 14.4MiB/s-19.9MiB/s (15.1MB/s-20.9MB/s), io=70.5MiB (73.9MB), run=1003-1045msec 00:18:12.542 00:18:12.542 Disk stats (read/write): 00:18:12.542 nvme0n1: ios=2610/2752, merge=0/0, ticks=34105/63056, in_queue=97161, util=86.87% 00:18:12.542 nvme0n2: ios=4138/4279, merge=0/0, ticks=51025/45836, in_queue=96861, util=87.92% 00:18:12.542 nvme0n3: ios=4096/4608, merge=0/0, ticks=24265/25300, in_queue=49565, util=88.94% 00:18:12.542 nvme0n4: ios=3604/3711, merge=0/0, ticks=51986/49783, in_queue=101769, util=98.00% 00:18:12.542 07:37:16 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:12.542 [global] 00:18:12.542 thread=1 00:18:12.542 invalidate=1 00:18:12.542 rw=randwrite 00:18:12.542 time_based=1 00:18:12.542 runtime=1 00:18:12.542 ioengine=libaio 00:18:12.542 direct=1 00:18:12.542 bs=4096 00:18:12.542 iodepth=128 00:18:12.542 norandommap=0 00:18:12.542 numjobs=1 00:18:12.542 00:18:12.542 verify_dump=1 00:18:12.542 verify_backlog=512 00:18:12.542 verify_state_save=0 00:18:12.542 do_verify=1 00:18:12.542 verify=crc32c-intel 00:18:12.542 [job0] 00:18:12.542 filename=/dev/nvme0n1 00:18:12.542 [job1] 00:18:12.542 filename=/dev/nvme0n2 00:18:12.542 [job2] 00:18:12.542 filename=/dev/nvme0n3 00:18:12.542 [job3] 00:18:12.542 filename=/dev/nvme0n4 00:18:12.542 Could not set queue depth (nvme0n1) 00:18:12.542 Could not set queue depth (nvme0n2) 00:18:12.542 Could not set queue depth (nvme0n3) 00:18:12.542 
Could not set queue depth (nvme0n4) 00:18:12.542 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:12.542 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:12.542 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:12.542 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:12.542 fio-3.35 00:18:12.542 Starting 4 threads 00:18:13.915 00:18:13.915 job0: (groupid=0, jobs=1): err= 0: pid=4131450: Mon Oct 7 07:37:17 2024 00:18:13.915 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:18:13.915 slat (nsec): min=1465, max=13280k, avg=132175.32, stdev=916987.78 00:18:13.915 clat (usec): min=3997, max=82413, avg=17496.75, stdev=13578.36 00:18:13.915 lat (usec): min=4002, max=82420, avg=17628.92, stdev=13669.62 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 5866], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9372], 00:18:13.915 | 30.00th=[10159], 40.00th=[12911], 50.00th=[13960], 60.00th=[15139], 00:18:13.915 | 70.00th=[16909], 80.00th=[19530], 90.00th=[27919], 95.00th=[47973], 00:18:13.915 | 99.00th=[74974], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:18:13.915 | 99.99th=[82314] 00:18:13.915 write: IOPS=3689, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec); 0 zone resets 00:18:13.915 slat (usec): min=2, max=11671, avg=131.21, stdev=728.77 00:18:13.915 clat (usec): min=3037, max=94962, avg=17330.76, stdev=15212.82 00:18:13.915 lat (usec): min=3045, max=94973, avg=17461.97, stdev=15303.87 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 4490], 5.00th=[ 6718], 10.00th=[ 7898], 20.00th=[ 9634], 00:18:13.915 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11994], 60.00th=[14091], 00:18:13.915 | 70.00th=[17433], 80.00th=[22152], 90.00th=[29230], 95.00th=[46400], 00:18:13.915 | 
99.00th=[89654], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:18:13.915 | 99.99th=[94897] 00:18:13.915 bw ( KiB/s): min= 8208, max=20464, per=20.73%, avg=14336.00, stdev=8666.30, samples=2 00:18:13.915 iops : min= 2052, max= 5116, avg=3584.00, stdev=2166.58, samples=2 00:18:13.915 lat (msec) : 4=0.48%, 10=28.58%, 20=48.22%, 50=18.15%, 100=4.57% 00:18:13.915 cpu : usr=2.89%, sys=4.79%, ctx=389, majf=0, minf=1 00:18:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.915 issued rwts: total=3584,3704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.915 job1: (groupid=0, jobs=1): err= 0: pid=4131451: Mon Oct 7 07:37:17 2024 00:18:13.915 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:18:13.915 slat (nsec): min=1075, max=18477k, avg=86404.53, stdev=715558.62 00:18:13.915 clat (usec): min=2230, max=34690, avg=11820.12, stdev=5161.58 00:18:13.915 lat (usec): min=2232, max=34697, avg=11906.53, stdev=5201.94 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 3392], 5.00th=[ 4948], 10.00th=[ 6718], 20.00th=[ 7963], 00:18:13.915 | 30.00th=[ 8848], 40.00th=[10290], 50.00th=[11076], 60.00th=[11469], 00:18:13.915 | 70.00th=[12649], 80.00th=[14746], 90.00th=[19530], 95.00th=[23200], 00:18:13.915 | 99.00th=[28181], 99.50th=[32113], 99.90th=[34866], 99.95th=[34866], 00:18:13.915 | 99.99th=[34866] 00:18:13.915 write: IOPS=5473, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1007msec); 0 zone resets 00:18:13.915 slat (nsec): min=1875, max=12839k, avg=76437.12, stdev=506916.22 00:18:13.915 clat (usec): min=668, max=44863, avg=12203.95, stdev=6864.06 00:18:13.915 lat (usec): min=675, max=44866, avg=12280.39, stdev=6889.77 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 2057], 
5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 8029], 00:18:13.915 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11338], 00:18:13.915 | 70.00th=[12518], 80.00th=[16712], 90.00th=[21627], 95.00th=[25560], 00:18:13.915 | 99.00th=[39060], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:18:13.915 | 99.99th=[44827] 00:18:13.915 bw ( KiB/s): min=18504, max=24568, per=31.14%, avg=21536.00, stdev=4287.90, samples=2 00:18:13.915 iops : min= 4626, max= 6142, avg=5384.00, stdev=1071.97, samples=2 00:18:13.915 lat (usec) : 750=0.05% 00:18:13.915 lat (msec) : 2=0.46%, 4=3.11%, 10=37.84%, 20=48.48%, 50=10.06% 00:18:13.915 cpu : usr=3.68%, sys=4.27%, ctx=513, majf=0, minf=2 00:18:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.915 issued rwts: total=5120,5512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.915 job2: (groupid=0, jobs=1): err= 0: pid=4131452: Mon Oct 7 07:37:17 2024 00:18:13.915 read: IOPS=4513, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1007msec) 00:18:13.915 slat (nsec): min=1298, max=13177k, avg=99686.73, stdev=667895.65 00:18:13.915 clat (usec): min=3323, max=31859, avg=13769.10, stdev=3898.77 00:18:13.915 lat (usec): min=4182, max=31881, avg=13868.79, stdev=3941.52 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 5669], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10552], 00:18:13.915 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13304], 60.00th=[14353], 00:18:13.915 | 70.00th=[15664], 80.00th=[17171], 90.00th=[18482], 95.00th=[19268], 00:18:13.915 | 99.00th=[27657], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:18:13.915 | 99.99th=[31851] 00:18:13.915 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:18:13.915 slat (nsec): min=1727, 
max=16465k, avg=99604.72, stdev=642271.80 00:18:13.915 clat (usec): min=672, max=49405, avg=14055.58, stdev=7111.81 00:18:13.915 lat (usec): min=684, max=49414, avg=14155.18, stdev=7162.77 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 2024], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 8455], 00:18:13.915 | 30.00th=[10028], 40.00th=[10945], 50.00th=[12387], 60.00th=[15795], 00:18:13.915 | 70.00th=[17171], 80.00th=[18744], 90.00th=[22152], 95.00th=[24773], 00:18:13.915 | 99.00th=[41157], 99.50th=[42206], 99.90th=[49546], 99.95th=[49546], 00:18:13.915 | 99.99th=[49546] 00:18:13.915 bw ( KiB/s): min=16384, max=20480, per=26.66%, avg=18432.00, stdev=2896.31, samples=2 00:18:13.915 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:18:13.915 lat (usec) : 750=0.05%, 1000=0.09% 00:18:13.915 lat (msec) : 2=0.32%, 4=1.74%, 10=19.35%, 20=68.88%, 50=9.57% 00:18:13.915 cpu : usr=3.48%, sys=5.27%, ctx=392, majf=0, minf=1 00:18:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.915 issued rwts: total=4545,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.915 job3: (groupid=0, jobs=1): err= 0: pid=4131453: Mon Oct 7 07:37:17 2024 00:18:13.915 read: IOPS=3363, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1007msec) 00:18:13.915 slat (nsec): min=1140, max=17253k, avg=151937.65, stdev=970830.07 00:18:13.915 clat (usec): min=389, max=58454, avg=19825.83, stdev=11021.26 00:18:13.915 lat (usec): min=396, max=58469, avg=19977.77, stdev=11103.52 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 2507], 5.00th=[ 5997], 10.00th=[ 8356], 20.00th=[10159], 00:18:13.915 | 30.00th=[11469], 40.00th=[14222], 50.00th=[17695], 60.00th=[20055], 00:18:13.915 | 70.00th=[26346], 80.00th=[29754], 
90.00th=[36439], 95.00th=[41681], 00:18:13.915 | 99.00th=[46400], 99.50th=[48497], 99.90th=[48497], 99.95th=[51119], 00:18:13.915 | 99.99th=[58459] 00:18:13.915 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:18:13.915 slat (nsec): min=1807, max=17977k, avg=122331.36, stdev=753390.45 00:18:13.915 clat (usec): min=750, max=67650, avg=16734.82, stdev=9372.79 00:18:13.915 lat (usec): min=1748, max=67658, avg=16857.15, stdev=9399.10 00:18:13.915 clat percentiles (usec): 00:18:13.915 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 9503], 00:18:13.915 | 30.00th=[11469], 40.00th=[11731], 50.00th=[13698], 60.00th=[17695], 00:18:13.915 | 70.00th=[19792], 80.00th=[23725], 90.00th=[32113], 95.00th=[33817], 00:18:13.915 | 99.00th=[46400], 99.50th=[51119], 99.90th=[67634], 99.95th=[67634], 00:18:13.915 | 99.99th=[67634] 00:18:13.916 bw ( KiB/s): min=12288, max=16384, per=20.73%, avg=14336.00, stdev=2896.31, samples=2 00:18:13.916 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:18:13.916 lat (usec) : 500=0.09%, 750=0.01%, 1000=0.01% 00:18:13.916 lat (msec) : 2=0.33%, 4=0.77%, 10=18.68%, 20=45.55%, 50=34.17% 00:18:13.916 lat (msec) : 100=0.39% 00:18:13.916 cpu : usr=3.18%, sys=4.17%, ctx=340, majf=0, minf=1 00:18:13.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:13.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:13.916 issued rwts: total=3387,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.916 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:13.916 00:18:13.916 Run status group 0 (all jobs): 00:18:13.916 READ: bw=64.5MiB/s (67.7MB/s), 13.1MiB/s-19.9MiB/s (13.8MB/s-20.8MB/s), io=65.0MiB (68.1MB), run=1004-1007msec 00:18:13.916 WRITE: bw=67.5MiB/s (70.8MB/s), 13.9MiB/s-21.4MiB/s (14.6MB/s-22.4MB/s), io=68.0MiB (71.3MB), run=1004-1007msec 
00:18:13.916 00:18:13.916 Disk stats (read/write): 00:18:13.916 nvme0n1: ios=3109/3535, merge=0/0, ticks=39022/39727, in_queue=78749, util=86.76% 00:18:13.916 nvme0n2: ios=4115/4343, merge=0/0, ticks=44533/41947, in_queue=86480, util=99.38% 00:18:13.916 nvme0n3: ios=3273/3584, merge=0/0, ticks=33326/33366, in_queue=66692, util=87.55% 00:18:13.916 nvme0n4: ios=2560/2846, merge=0/0, ticks=23565/18797, in_queue=42362, util=89.20% 00:18:13.916 07:37:17 -- target/fio.sh@55 -- # sync 00:18:13.916 07:37:17 -- target/fio.sh@59 -- # fio_pid=4131682 00:18:13.916 07:37:17 -- target/fio.sh@61 -- # sleep 3 00:18:13.916 07:37:17 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:13.916 [global] 00:18:13.916 thread=1 00:18:13.916 invalidate=1 00:18:13.916 rw=read 00:18:13.916 time_based=1 00:18:13.916 runtime=10 00:18:13.916 ioengine=libaio 00:18:13.916 direct=1 00:18:13.916 bs=4096 00:18:13.916 iodepth=1 00:18:13.916 norandommap=1 00:18:13.916 numjobs=1 00:18:13.916 00:18:13.916 [job0] 00:18:13.916 filename=/dev/nvme0n1 00:18:13.916 [job1] 00:18:13.916 filename=/dev/nvme0n2 00:18:13.916 [job2] 00:18:13.916 filename=/dev/nvme0n3 00:18:13.916 [job3] 00:18:13.916 filename=/dev/nvme0n4 00:18:13.916 Could not set queue depth (nvme0n1) 00:18:13.916 Could not set queue depth (nvme0n2) 00:18:13.916 Could not set queue depth (nvme0n3) 00:18:13.916 Could not set queue depth (nvme0n4) 00:18:14.173 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.173 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.173 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.173 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.173 fio-3.35 00:18:14.173 Starting 4 threads 00:18:17.446 
07:37:20 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:17.446 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=21770240, buflen=4096 00:18:17.446 fio: pid=4131828, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:18:17.446 07:37:20 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:17.446 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13422592, buflen=4096 00:18:17.446 fio: pid=4131827, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:18:17.446 07:37:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.446 07:37:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:17.446 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=34947072, buflen=4096 00:18:17.446 fio: pid=4131825, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:18:17.446 07:37:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.446 07:37:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:17.704 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31875072, buflen=4096 00:18:17.704 fio: pid=4131826, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:18:17.704 07:37:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.704 07:37:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:17.704 00:18:17.704 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation 
not supported): pid=4131825: Mon Oct 7 07:37:21 2024 00:18:17.704 read: IOPS=2735, BW=10.7MiB/s (11.2MB/s)(33.3MiB/3119msec) 00:18:17.704 slat (usec): min=7, max=24640, avg=18.11, stdev=394.36 00:18:17.704 clat (usec): min=263, max=41503, avg=342.06, stdev=882.27 00:18:17.704 lat (usec): min=271, max=41514, avg=360.17, stdev=967.06 00:18:17.704 clat percentiles (usec): 00:18:17.704 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:18:17.705 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:18:17.705 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 379], 00:18:17.705 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 586], 99.95th=[ 898], 00:18:17.705 | 99.99th=[41681] 00:18:17.705 bw ( KiB/s): min= 7384, max=12320, per=36.79%, avg=11026.67, stdev=1938.22, samples=6 00:18:17.705 iops : min= 1846, max= 3080, avg=2756.67, stdev=484.56, samples=6 00:18:17.705 lat (usec) : 500=99.71%, 750=0.21%, 1000=0.02% 00:18:17.705 lat (msec) : 50=0.05% 00:18:17.705 cpu : usr=1.54%, sys=4.87%, ctx=8539, majf=0, minf=1 00:18:17.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 issued rwts: total=8533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.705 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4131826: Mon Oct 7 07:37:21 2024 00:18:17.705 read: IOPS=2341, BW=9365KiB/s (9589kB/s)(30.4MiB/3324msec) 00:18:17.705 slat (usec): min=6, max=17019, avg=17.76, stdev=356.32 00:18:17.705 clat (usec): min=272, max=41170, avg=405.22, stdev=1478.27 00:18:17.705 lat (usec): min=279, max=41179, avg=422.98, stdev=1520.91 00:18:17.705 clat percentiles (usec): 00:18:17.705 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 
20.00th=[ 314], 00:18:17.705 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 363], 00:18:17.705 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 408], 00:18:17.705 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:18:17.705 | 99.99th=[41157] 00:18:17.705 bw ( KiB/s): min= 6216, max=11992, per=30.83%, avg=9240.17, stdev=2287.94, samples=6 00:18:17.705 iops : min= 1554, max= 2998, avg=2310.00, stdev=571.94, samples=6 00:18:17.705 lat (usec) : 500=98.84%, 750=0.95%, 1000=0.03% 00:18:17.705 lat (msec) : 2=0.01%, 4=0.01%, 50=0.14% 00:18:17.705 cpu : usr=1.29%, sys=4.09%, ctx=7790, majf=0, minf=2 00:18:17.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 issued rwts: total=7783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.705 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4131827: Mon Oct 7 07:37:21 2024 00:18:17.705 read: IOPS=1126, BW=4506KiB/s (4614kB/s)(12.8MiB/2909msec) 00:18:17.705 slat (usec): min=6, max=14802, avg=16.40, stdev=326.48 00:18:17.705 clat (usec): min=246, max=42004, avg=863.05, stdev=4624.59 00:18:17.705 lat (usec): min=255, max=42013, avg=879.45, stdev=4636.00 00:18:17.705 clat percentiles (usec): 00:18:17.705 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:18:17.705 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:18:17.705 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 404], 00:18:17.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:17.705 | 99.99th=[42206] 00:18:17.705 bw ( KiB/s): min= 104, max=10192, per=11.72%, avg=3513.60, stdev=4780.66, samples=5 00:18:17.705 iops : min= 26, max= 
2548, avg=878.40, stdev=1195.17, samples=5 00:18:17.705 lat (usec) : 250=0.03%, 500=98.08%, 750=0.49%, 1000=0.03% 00:18:17.705 lat (msec) : 2=0.03%, 50=1.31% 00:18:17.705 cpu : usr=0.21%, sys=1.27%, ctx=3280, majf=0, minf=2 00:18:17.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 issued rwts: total=3278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.705 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4131828: Mon Oct 7 07:37:21 2024 00:18:17.705 read: IOPS=1954, BW=7816KiB/s (8004kB/s)(20.8MiB/2720msec) 00:18:17.705 slat (nsec): min=6928, max=41852, avg=8382.64, stdev=1687.67 00:18:17.705 clat (usec): min=283, max=41870, avg=496.41, stdev=2044.03 00:18:17.705 lat (usec): min=292, max=41879, avg=504.79, stdev=2044.31 00:18:17.705 clat percentiles (usec): 00:18:17.705 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 363], 20.00th=[ 371], 00:18:17.705 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 388], 00:18:17.705 | 70.00th=[ 396], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 474], 00:18:17.705 | 99.00th=[ 515], 99.50th=[ 578], 99.90th=[41157], 99.95th=[41157], 00:18:17.705 | 99.99th=[41681] 00:18:17.705 bw ( KiB/s): min= 5960, max=10128, per=25.91%, avg=7766.40, stdev=2155.47, samples=5 00:18:17.705 iops : min= 1490, max= 2532, avg=1941.60, stdev=538.87, samples=5 00:18:17.705 lat (usec) : 500=97.99%, 750=1.71% 00:18:17.705 lat (msec) : 20=0.02%, 50=0.26% 00:18:17.705 cpu : usr=1.40%, sys=3.05%, ctx=5316, majf=0, minf=2 00:18:17.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.705 issued rwts: total=5316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.705 00:18:17.705 Run status group 0 (all jobs): 00:18:17.705 READ: bw=29.3MiB/s (30.7MB/s), 4506KiB/s-10.7MiB/s (4614kB/s-11.2MB/s), io=97.3MiB (102MB), run=2720-3324msec 00:18:17.705 00:18:17.705 Disk stats (read/write): 00:18:17.705 nvme0n1: ios=8575/0, merge=0/0, ticks=3031/0, in_queue=3031, util=97.10% 00:18:17.705 nvme0n2: ios=7214/0, merge=0/0, ticks=2895/0, in_queue=2895, util=94.65% 00:18:17.705 nvme0n3: ios=3178/0, merge=0/0, ticks=2780/0, in_queue=2780, util=95.67% 00:18:17.705 nvme0n4: ios=5099/0, merge=0/0, ticks=2492/0, in_queue=2492, util=96.41% 00:18:17.963 07:37:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:17.963 07:37:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:18.220 07:37:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:18.220 07:37:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:18.220 07:37:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:18.220 07:37:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:18.477 07:37:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:18.477 07:37:22 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:18.735 07:37:22 -- target/fio.sh@69 -- # fio_status=0 00:18:18.735 07:37:22 -- target/fio.sh@70 -- # wait 4131682 00:18:18.735 07:37:22 -- target/fio.sh@70 -- # fio_status=4 
00:18:18.735 07:37:22 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.735 07:37:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.735 07:37:22 -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.735 07:37:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:18.735 07:37:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.735 07:37:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:18.735 07:37:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.993 07:37:22 -- common/autotest_common.sh@1210 -- # return 0 00:18:18.993 07:37:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:18.993 07:37:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:18.993 nvmf hotplug test: fio failed as expected 00:18:18.993 07:37:22 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.993 07:37:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:18.993 07:37:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:18.993 07:37:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:18.993 07:37:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:18.993 07:37:22 -- target/fio.sh@91 -- # nvmftestfini 00:18:18.993 07:37:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.993 07:37:22 -- nvmf/common.sh@116 -- # sync 00:18:18.993 07:37:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.993 07:37:22 -- nvmf/common.sh@119 -- # set +e 00:18:18.993 07:37:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.993 07:37:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.993 rmmod nvme_tcp 00:18:18.993 rmmod nvme_fabrics 00:18:19.251 rmmod nvme_keyring 00:18:19.251 07:37:22 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:19.251 07:37:22 -- nvmf/common.sh@123 -- # set -e 00:18:19.251 07:37:22 -- nvmf/common.sh@124 -- # return 0 00:18:19.251 07:37:22 -- nvmf/common.sh@477 -- # '[' -n 4128772 ']' 00:18:19.251 07:37:22 -- nvmf/common.sh@478 -- # killprocess 4128772 00:18:19.251 07:37:22 -- common/autotest_common.sh@926 -- # '[' -z 4128772 ']' 00:18:19.251 07:37:22 -- common/autotest_common.sh@930 -- # kill -0 4128772 00:18:19.251 07:37:22 -- common/autotest_common.sh@931 -- # uname 00:18:19.251 07:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:19.251 07:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4128772 00:18:19.251 07:37:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:19.251 07:37:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:19.251 07:37:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4128772' 00:18:19.251 killing process with pid 4128772 00:18:19.251 07:37:23 -- common/autotest_common.sh@945 -- # kill 4128772 00:18:19.251 07:37:23 -- common/autotest_common.sh@950 -- # wait 4128772 00:18:19.510 07:37:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:19.510 07:37:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:19.510 07:37:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:19.510 07:37:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.510 07:37:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:19.510 07:37:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.510 07:37:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.510 07:37:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.409 07:37:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:21.409 00:18:21.409 real 0m26.303s 00:18:21.409 user 1m48.967s 00:18:21.409 sys 0m7.830s 00:18:21.409 07:37:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.409 07:37:25 -- common/autotest_common.sh@10 -- # set +x 00:18:21.410 ************************************ 00:18:21.410 END TEST nvmf_fio_target 00:18:21.410 ************************************ 00:18:21.410 07:37:25 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:21.410 07:37:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:21.410 07:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:21.410 07:37:25 -- common/autotest_common.sh@10 -- # set +x 00:18:21.410 ************************************ 00:18:21.410 START TEST nvmf_bdevio 00:18:21.410 ************************************ 00:18:21.410 07:37:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:21.667 * Looking for test storage... 00:18:21.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.667 07:37:25 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.667 07:37:25 -- nvmf/common.sh@7 -- # uname -s 00:18:21.667 07:37:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.667 07:37:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.667 07:37:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.667 07:37:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.667 07:37:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.667 07:37:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.667 07:37:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.667 07:37:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.667 07:37:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.667 07:37:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.667 07:37:25 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:21.667 07:37:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:21.667 07:37:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.667 07:37:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.667 07:37:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.667 07:37:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.667 07:37:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.667 07:37:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.667 07:37:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.667 07:37:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.667 07:37:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.667 07:37:25 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.667 07:37:25 -- paths/export.sh@5 -- # export PATH 00:18:21.667 07:37:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.667 07:37:25 -- nvmf/common.sh@46 -- # : 0 00:18:21.668 07:37:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:21.668 07:37:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:21.668 07:37:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:21.668 07:37:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.668 07:37:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.668 07:37:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:21.668 07:37:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:21.668 07:37:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:21.668 07:37:25 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:21.668 07:37:25 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:21.668 07:37:25 -- 
target/bdevio.sh@14 -- # nvmftestinit 00:18:21.668 07:37:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:21.668 07:37:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.668 07:37:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:21.668 07:37:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:21.668 07:37:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:21.668 07:37:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.668 07:37:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.668 07:37:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.668 07:37:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:21.668 07:37:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:21.668 07:37:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:21.668 07:37:25 -- common/autotest_common.sh@10 -- # set +x 00:18:26.927 07:37:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.927 07:37:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.927 07:37:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.927 07:37:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.927 07:37:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.927 07:37:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.927 07:37:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.927 07:37:30 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.927 07:37:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.927 07:37:30 -- nvmf/common.sh@295 -- # e810=() 00:18:26.927 07:37:30 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.927 07:37:30 -- nvmf/common.sh@296 -- # x722=() 00:18:26.927 07:37:30 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.927 07:37:30 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.927 07:37:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.927 07:37:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:18:26.927 07:37:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.927 07:37:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.927 07:37:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.927 07:37:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.927 07:37:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.927 07:37:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:26.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:26.927 07:37:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:18:26.927 07:37:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:26.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:26.927 07:37:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.927 07:37:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.927 07:37:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.927 07:37:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.927 07:37:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.927 07:37:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.927 07:37:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:26.927 Found net devices under 0000:af:00.0: cvl_0_0 00:18:26.927 07:37:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.927 07:37:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.927 07:37:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.928 07:37:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.928 07:37:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.928 07:37:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:26.928 Found net devices under 0000:af:00.1: cvl_0_1 00:18:26.928 07:37:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.928 07:37:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.928 07:37:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.928 07:37:30 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.928 07:37:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.928 07:37:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.928 07:37:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.928 07:37:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.928 07:37:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.928 07:37:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.928 07:37:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.928 07:37:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.928 07:37:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.928 07:37:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.928 07:37:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.928 07:37:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.928 07:37:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.928 07:37:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.928 07:37:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.928 07:37:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.928 07:37:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.928 07:37:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.928 07:37:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.928 07:37:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.928 07:37:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.928 07:37:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:26.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:18:26.928 00:18:26.928 --- 10.0.0.2 ping statistics --- 00:18:26.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.928 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:18:26.928 07:37:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:18:27.185 00:18:27.185 --- 10.0.0.1 ping statistics --- 00:18:27.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.185 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:18:27.185 07:37:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.185 07:37:30 -- nvmf/common.sh@410 -- # return 0 00:18:27.185 07:37:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.185 07:37:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.185 07:37:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:27.185 07:37:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:27.185 07:37:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.185 07:37:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:27.185 07:37:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:27.185 07:37:30 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:27.185 07:37:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.185 07:37:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:27.185 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 07:37:30 -- nvmf/common.sh@469 -- # nvmfpid=4136011 00:18:27.185 07:37:30 -- nvmf/common.sh@470 -- # waitforlisten 4136011 00:18:27.185 07:37:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:27.185 07:37:30 -- common/autotest_common.sh@819 
-- # '[' -z 4136011 ']' 00:18:27.185 07:37:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.185 07:37:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:27.185 07:37:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.185 07:37:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:27.185 07:37:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 [2024-10-07 07:37:30.993626] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:27.185 [2024-10-07 07:37:30.993668] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.185 [2024-10-07 07:37:31.052849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.185 [2024-10-07 07:37:31.127978] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.185 [2024-10-07 07:37:31.128097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.185 [2024-10-07 07:37:31.128105] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.185 [2024-10-07 07:37:31.128111] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:27.185 [2024-10-07 07:37:31.128251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:27.185 [2024-10-07 07:37:31.128361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:27.185 [2024-10-07 07:37:31.128467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.185 [2024-10-07 07:37:31.128468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:28.118 07:37:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.118 07:37:31 -- common/autotest_common.sh@852 -- # return 0 00:18:28.118 07:37:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.118 07:37:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 07:37:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.118 07:37:31 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.118 07:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 [2024-10-07 07:37:31.847301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.118 07:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.118 07:37:31 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.118 07:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 Malloc0 00:18:28.118 07:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.118 07:37:31 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.118 07:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 07:37:31 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:18:28.118 07:37:31 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.118 07:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 07:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.118 07:37:31 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.118 07:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.118 07:37:31 -- common/autotest_common.sh@10 -- # set +x 00:18:28.118 [2024-10-07 07:37:31.902526] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.118 07:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.118 07:37:31 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:28.118 07:37:31 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:28.118 07:37:31 -- nvmf/common.sh@520 -- # config=() 00:18:28.118 07:37:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.118 07:37:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.118 07:37:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.118 { 00:18:28.118 "params": { 00:18:28.118 "name": "Nvme$subsystem", 00:18:28.118 "trtype": "$TEST_TRANSPORT", 00:18:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.118 "adrfam": "ipv4", 00:18:28.118 "trsvcid": "$NVMF_PORT", 00:18:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.118 "hdgst": ${hdgst:-false}, 00:18:28.118 "ddgst": ${ddgst:-false} 00:18:28.118 }, 00:18:28.118 "method": "bdev_nvme_attach_controller" 00:18:28.118 } 00:18:28.118 EOF 00:18:28.118 )") 00:18:28.118 07:37:31 -- nvmf/common.sh@542 -- # cat 00:18:28.118 07:37:31 -- nvmf/common.sh@544 -- # jq . 
00:18:28.118 07:37:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.118 07:37:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.118 "params": { 00:18:28.118 "name": "Nvme1", 00:18:28.118 "trtype": "tcp", 00:18:28.118 "traddr": "10.0.0.2", 00:18:28.118 "adrfam": "ipv4", 00:18:28.118 "trsvcid": "4420", 00:18:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.118 "hdgst": false, 00:18:28.118 "ddgst": false 00:18:28.118 }, 00:18:28.118 "method": "bdev_nvme_attach_controller" 00:18:28.118 }' 00:18:28.118 [2024-10-07 07:37:31.950591] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:28.118 [2024-10-07 07:37:31.950637] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136254 ] 00:18:28.118 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.118 [2024-10-07 07:37:32.006698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:28.118 [2024-10-07 07:37:32.076998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.118 [2024-10-07 07:37:32.077094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.118 [2024-10-07 07:37:32.077096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.375 [2024-10-07 07:37:32.266021] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:28.375 [2024-10-07 07:37:32.266054] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:28.375 I/O targets: 00:18:28.375 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:28.375 00:18:28.375 00:18:28.375 CUnit - A unit testing framework for C - Version 2.1-3 00:18:28.375 http://cunit.sourceforge.net/ 00:18:28.375 00:18:28.375 00:18:28.375 Suite: bdevio tests on: Nvme1n1 00:18:28.375 Test: blockdev write read block ...passed 00:18:28.631 Test: blockdev write zeroes read block ...passed 00:18:28.631 Test: blockdev write zeroes read no split ...passed 00:18:28.631 Test: blockdev write zeroes read split ...passed 00:18:28.631 Test: blockdev write zeroes read split partial ...passed 00:18:28.631 Test: blockdev reset ...[2024-10-07 07:37:32.474804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.631 [2024-10-07 07:37:32.474858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f1f0 (9): Bad file descriptor 00:18:28.631 [2024-10-07 07:37:32.486341] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:28.631 passed 00:18:28.631 Test: blockdev write read 8 blocks ...passed 00:18:28.631 Test: blockdev write read size > 128k ...passed 00:18:28.631 Test: blockdev write read invalid size ...passed 00:18:28.631 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:28.631 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:28.631 Test: blockdev write read max offset ...passed 00:18:28.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:28.889 Test: blockdev writev readv 8 blocks ...passed 00:18:28.889 Test: blockdev writev readv 30 x 1block ...passed 00:18:28.889 Test: blockdev writev readv block ...passed 00:18:28.889 Test: blockdev writev readv size > 128k ...passed 00:18:28.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:28.889 Test: blockdev comparev and writev ...[2024-10-07 07:37:32.740811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.740840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.740862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.741793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.889 [2024-10-07 07:37:32.741800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.889 passed 00:18:28.889 Test: blockdev nvme passthru rw ...passed 00:18:28.889 Test: blockdev nvme passthru vendor specific ...[2024-10-07 07:37:32.825422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.889 [2024-10-07 07:37:32.825437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.825593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.889 [2024-10-07 07:37:32.825603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.825762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.889 [2024-10-07 07:37:32.825771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.889 [2024-10-07 07:37:32.825925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.889 [2024-10-07 07:37:32.825935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.889 passed 00:18:28.889 Test: blockdev nvme admin passthru ...passed 00:18:29.147 Test: blockdev copy ...passed 00:18:29.147 00:18:29.147 Run Summary: Type Total Ran Passed Failed Inactive 00:18:29.147 suites 1 1 n/a 0 0 00:18:29.147 tests 23 23 23 0 0 00:18:29.147 asserts 152 152 152 0 n/a 00:18:29.147 00:18:29.147 Elapsed time = 1.243 seconds 00:18:29.147 07:37:33 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.147 07:37:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:29.147 07:37:33 -- common/autotest_common.sh@10 -- # set +x 00:18:29.147 07:37:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:29.147 07:37:33 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:29.147 07:37:33 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:29.147 07:37:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.147 07:37:33 -- nvmf/common.sh@116 -- # sync 00:18:29.147 
07:37:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.147 07:37:33 -- nvmf/common.sh@119 -- # set +e 00:18:29.147 07:37:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.147 07:37:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.147 rmmod nvme_tcp 00:18:29.147 rmmod nvme_fabrics 00:18:29.147 rmmod nvme_keyring 00:18:29.405 07:37:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.405 07:37:33 -- nvmf/common.sh@123 -- # set -e 00:18:29.405 07:37:33 -- nvmf/common.sh@124 -- # return 0 00:18:29.405 07:37:33 -- nvmf/common.sh@477 -- # '[' -n 4136011 ']' 00:18:29.405 07:37:33 -- nvmf/common.sh@478 -- # killprocess 4136011 00:18:29.405 07:37:33 -- common/autotest_common.sh@926 -- # '[' -z 4136011 ']' 00:18:29.405 07:37:33 -- common/autotest_common.sh@930 -- # kill -0 4136011 00:18:29.405 07:37:33 -- common/autotest_common.sh@931 -- # uname 00:18:29.405 07:37:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.405 07:37:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4136011 00:18:29.405 07:37:33 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:18:29.405 07:37:33 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:18:29.405 07:37:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4136011' 00:18:29.405 killing process with pid 4136011 00:18:29.405 07:37:33 -- common/autotest_common.sh@945 -- # kill 4136011 00:18:29.405 07:37:33 -- common/autotest_common.sh@950 -- # wait 4136011 00:18:29.663 07:37:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:29.663 07:37:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:29.663 07:37:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:29.663 07:37:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.663 07:37:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:29.663 07:37:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.663 07:37:33 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.663 07:37:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.565 07:37:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:31.565 00:18:31.565 real 0m10.114s 00:18:31.565 user 0m12.871s 00:18:31.565 sys 0m4.556s 00:18:31.565 07:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.565 07:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:31.566 ************************************ 00:18:31.566 END TEST nvmf_bdevio 00:18:31.566 ************************************ 00:18:31.566 07:37:35 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:18:31.566 07:37:35 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.566 07:37:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:31.566 07:37:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.566 07:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:31.566 ************************************ 00:18:31.566 START TEST nvmf_bdevio_no_huge 00:18:31.566 ************************************ 00:18:31.566 07:37:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.824 * Looking for test storage... 
00:18:31.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.824 07:37:35 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.824 07:37:35 -- nvmf/common.sh@7 -- # uname -s 00:18:31.824 07:37:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.824 07:37:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.824 07:37:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.824 07:37:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.824 07:37:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.824 07:37:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.824 07:37:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.824 07:37:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.824 07:37:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.824 07:37:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.824 07:37:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:31.824 07:37:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:31.824 07:37:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.824 07:37:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.824 07:37:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.824 07:37:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.824 07:37:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.824 07:37:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.824 07:37:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.824 07:37:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.824 07:37:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.824 07:37:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.824 07:37:35 -- paths/export.sh@5 -- # export PATH 00:18:31.824 07:37:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.824 07:37:35 -- nvmf/common.sh@46 -- # : 0 00:18:31.824 07:37:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.824 07:37:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.824 07:37:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.824 07:37:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.824 07:37:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.824 07:37:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.824 07:37:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.824 07:37:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.824 07:37:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.824 07:37:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.824 07:37:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:31.824 07:37:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:31.824 07:37:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.824 07:37:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.824 07:37:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.824 07:37:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.824 07:37:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.824 07:37:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.824 07:37:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.824 07:37:35 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.824 07:37:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.824 07:37:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.824 07:37:35 -- common/autotest_common.sh@10 -- # set +x 00:18:37.145 07:37:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:37.145 07:37:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:37.145 07:37:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:37.145 07:37:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:37.145 07:37:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:37.145 07:37:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:37.145 07:37:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:37.145 07:37:40 -- nvmf/common.sh@294 -- # net_devs=() 00:18:37.145 07:37:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:37.145 07:37:40 -- nvmf/common.sh@295 -- # e810=() 00:18:37.145 07:37:40 -- nvmf/common.sh@295 -- # local -ga e810 00:18:37.145 07:37:40 -- nvmf/common.sh@296 -- # x722=() 00:18:37.145 07:37:40 -- nvmf/common.sh@296 -- # local -ga x722 00:18:37.145 07:37:40 -- nvmf/common.sh@297 -- # mlx=() 00:18:37.145 07:37:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:37.145 07:37:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.145 07:37:40 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.145 07:37:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:37.145 07:37:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:37.145 07:37:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.145 07:37:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:37.145 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:37.145 07:37:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:37.145 07:37:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:37.145 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:37.145 07:37:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:37.145 07:37:40 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.145 07:37:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.145 07:37:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.145 07:37:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:37.145 Found net devices under 0000:af:00.0: cvl_0_0 00:18:37.145 07:37:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.145 07:37:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:37.145 07:37:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.145 07:37:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.145 07:37:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:37.145 Found net devices under 0000:af:00.1: cvl_0_1 00:18:37.145 07:37:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.145 07:37:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:37.145 07:37:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:37.145 07:37:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:37.145 07:37:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.145 07:37:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.145 07:37:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.145 07:37:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:37.145 07:37:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.145 07:37:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.145 07:37:40 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:37.145 07:37:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.145 07:37:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.145 07:37:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:37.145 07:37:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:37.145 07:37:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.145 07:37:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.145 07:37:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.145 07:37:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.145 07:37:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:37.145 07:37:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.145 07:37:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.145 07:37:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.145 07:37:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:37.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:18:37.145 00:18:37.145 --- 10.0.0.2 ping statistics --- 00:18:37.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.145 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:18:37.145 07:37:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:18:37.145 00:18:37.145 --- 10.0.0.1 ping statistics --- 00:18:37.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.145 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:37.146 07:37:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.146 07:37:41 -- nvmf/common.sh@410 -- # return 0 00:18:37.146 07:37:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:37.146 07:37:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.146 07:37:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:37.146 07:37:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:37.146 07:37:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.146 07:37:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:37.146 07:37:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:37.146 07:37:41 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:37.146 07:37:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:37.146 07:37:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:37.146 07:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:37.403 07:37:41 -- nvmf/common.sh@469 -- # nvmfpid=4139950 00:18:37.403 07:37:41 -- nvmf/common.sh@470 -- # waitforlisten 4139950 00:18:37.403 07:37:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:37.403 07:37:41 -- common/autotest_common.sh@819 -- # '[' -z 4139950 ']' 00:18:37.403 07:37:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.403 07:37:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:37.403 07:37:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:37.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.403 07:37:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:37.403 07:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:37.403 [2024-10-07 07:37:41.162857] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:37.403 [2024-10-07 07:37:41.162905] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:37.403 [2024-10-07 07:37:41.232919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.403 [2024-10-07 07:37:41.315207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:37.403 [2024-10-07 07:37:41.315309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.403 [2024-10-07 07:37:41.315317] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.403 [2024-10-07 07:37:41.315323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.403 [2024-10-07 07:37:41.315421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.403 [2024-10-07 07:37:41.315518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:37.403 [2024-10-07 07:37:41.315626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.403 [2024-10-07 07:37:41.315627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:38.335 07:37:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.335 07:37:41 -- common/autotest_common.sh@852 -- # return 0 00:18:38.335 07:37:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:38.335 07:37:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:38.335 07:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 07:37:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.335 07:37:42 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.335 07:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.335 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 [2024-10-07 07:37:42.027407] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.335 07:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.335 07:37:42 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.335 07:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.335 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 Malloc0 00:18:38.335 07:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.335 07:37:42 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.335 07:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.335 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 07:37:42 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:18:38.335 07:37:42 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.335 07:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.335 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 07:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.335 07:37:42 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.335 07:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.335 07:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:38.335 [2024-10-07 07:37:42.075725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.335 07:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.335 07:37:42 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:38.335 07:37:42 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:38.335 07:37:42 -- nvmf/common.sh@520 -- # config=() 00:18:38.335 07:37:42 -- nvmf/common.sh@520 -- # local subsystem config 00:18:38.335 07:37:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:38.335 07:37:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:38.335 { 00:18:38.335 "params": { 00:18:38.335 "name": "Nvme$subsystem", 00:18:38.335 "trtype": "$TEST_TRANSPORT", 00:18:38.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.335 "adrfam": "ipv4", 00:18:38.335 "trsvcid": "$NVMF_PORT", 00:18:38.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.335 "hdgst": ${hdgst:-false}, 00:18:38.335 "ddgst": ${ddgst:-false} 00:18:38.335 }, 00:18:38.335 "method": "bdev_nvme_attach_controller" 00:18:38.335 } 00:18:38.335 EOF 00:18:38.335 )") 00:18:38.335 07:37:42 -- nvmf/common.sh@542 -- # cat 00:18:38.335 07:37:42 -- nvmf/common.sh@544 -- # jq 
. 00:18:38.335 07:37:42 -- nvmf/common.sh@545 -- # IFS=, 00:18:38.335 07:37:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:38.335 "params": { 00:18:38.335 "name": "Nvme1", 00:18:38.335 "trtype": "tcp", 00:18:38.335 "traddr": "10.0.0.2", 00:18:38.335 "adrfam": "ipv4", 00:18:38.335 "trsvcid": "4420", 00:18:38.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.335 "hdgst": false, 00:18:38.335 "ddgst": false 00:18:38.335 }, 00:18:38.335 "method": "bdev_nvme_attach_controller" 00:18:38.335 }' 00:18:38.335 [2024-10-07 07:37:42.120733] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:38.335 [2024-10-07 07:37:42.120775] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4140002 ] 00:18:38.335 [2024-10-07 07:37:42.180284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.335 [2024-10-07 07:37:42.264082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.335 [2024-10-07 07:37:42.264102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.335 [2024-10-07 07:37:42.264105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.592 [2024-10-07 07:37:42.560686] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:38.592 [2024-10-07 07:37:42.560714] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:38.849 I/O targets: 00:18:38.849 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.849 00:18:38.849 00:18:38.849 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.849 http://cunit.sourceforge.net/ 00:18:38.849 00:18:38.849 00:18:38.849 Suite: bdevio tests on: Nvme1n1 00:18:38.849 Test: blockdev write read block ...passed 00:18:38.849 Test: blockdev write zeroes read block ...passed 00:18:38.849 Test: blockdev write zeroes read no split ...passed 00:18:38.849 Test: blockdev write zeroes read split ...passed 00:18:38.849 Test: blockdev write zeroes read split partial ...passed 00:18:38.849 Test: blockdev reset ...[2024-10-07 07:37:42.773433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.849 [2024-10-07 07:37:42.773482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1555240 (9): Bad file descriptor 00:18:38.849 [2024-10-07 07:37:42.790074] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.849 passed 00:18:39.107 Test: blockdev write read 8 blocks ...passed 00:18:39.107 Test: blockdev write read size > 128k ...passed 00:18:39.107 Test: blockdev write read invalid size ...passed 00:18:39.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:39.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:39.107 Test: blockdev write read max offset ...passed 00:18:39.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:39.107 Test: blockdev writev readv 8 blocks ...passed 00:18:39.107 Test: blockdev writev readv 30 x 1block ...passed 00:18:39.107 Test: blockdev writev readv block ...passed 00:18:39.107 Test: blockdev writev readv size > 128k ...passed 00:18:39.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:39.107 Test: blockdev comparev and writev ...[2024-10-07 07:37:43.007828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.007856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.007881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:39.107 [2024-10-07 07:37:43.008821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.107 [2024-10-07 07:37:43.008828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:39.107 passed 00:18:39.364 Test: blockdev nvme passthru rw ...passed 00:18:39.364 Test: blockdev nvme passthru vendor specific ...[2024-10-07 07:37:43.092428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.364 [2024-10-07 07:37:43.092450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.364 [2024-10-07 07:37:43.092603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.364 [2024-10-07 07:37:43.092613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.364 [2024-10-07 07:37:43.092770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.364 [2024-10-07 07:37:43.092779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.364 [2024-10-07 07:37:43.092939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.364 [2024-10-07 07:37:43.092949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.364 passed 00:18:39.364 Test: blockdev nvme admin passthru ...passed 00:18:39.364 Test: blockdev copy ...passed 00:18:39.364 00:18:39.364 Run Summary: Type Total Ran Passed Failed Inactive 00:18:39.364 suites 1 1 n/a 0 0 00:18:39.364 tests 23 23 23 0 0 00:18:39.364 asserts 152 152 152 0 n/a 00:18:39.364 00:18:39.364 Elapsed time = 1.171 seconds 00:18:39.620 07:37:43 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.620 07:37:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.620 07:37:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.620 07:37:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.620 07:37:43 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.620 07:37:43 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:39.620 07:37:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:39.620 07:37:43 -- nvmf/common.sh@116 -- # sync 00:18:39.620 
07:37:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:39.620 07:37:43 -- nvmf/common.sh@119 -- # set +e 00:18:39.620 07:37:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:39.620 07:37:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:39.620 rmmod nvme_tcp 00:18:39.620 rmmod nvme_fabrics 00:18:39.620 rmmod nvme_keyring 00:18:39.620 07:37:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:39.620 07:37:43 -- nvmf/common.sh@123 -- # set -e 00:18:39.620 07:37:43 -- nvmf/common.sh@124 -- # return 0 00:18:39.620 07:37:43 -- nvmf/common.sh@477 -- # '[' -n 4139950 ']' 00:18:39.620 07:37:43 -- nvmf/common.sh@478 -- # killprocess 4139950 00:18:39.620 07:37:43 -- common/autotest_common.sh@926 -- # '[' -z 4139950 ']' 00:18:39.620 07:37:43 -- common/autotest_common.sh@930 -- # kill -0 4139950 00:18:39.620 07:37:43 -- common/autotest_common.sh@931 -- # uname 00:18:39.620 07:37:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:39.620 07:37:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4139950 00:18:39.620 07:37:43 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:18:39.620 07:37:43 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:18:39.620 07:37:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4139950' 00:18:39.620 killing process with pid 4139950 00:18:39.620 07:37:43 -- common/autotest_common.sh@945 -- # kill 4139950 00:18:39.620 07:37:43 -- common/autotest_common.sh@950 -- # wait 4139950 00:18:40.187 07:37:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:40.187 07:37:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:40.187 07:37:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:40.187 07:37:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.187 07:37:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:40.187 07:37:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.187 07:37:43 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.187 07:37:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.087 07:37:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:42.087 00:18:42.087 real 0m10.461s 00:18:42.087 user 0m14.171s 00:18:42.087 sys 0m4.957s 00:18:42.087 07:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.087 07:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:42.087 ************************************ 00:18:42.087 END TEST nvmf_bdevio_no_huge 00:18:42.087 ************************************ 00:18:42.087 07:37:46 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.087 07:37:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:42.087 07:37:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:42.087 07:37:46 -- common/autotest_common.sh@10 -- # set +x 00:18:42.087 ************************************ 00:18:42.087 START TEST nvmf_tls 00:18:42.087 ************************************ 00:18:42.087 07:37:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.344 * Looking for test storage... 
00:18:42.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.344 07:37:46 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.344 07:37:46 -- nvmf/common.sh@7 -- # uname -s 00:18:42.344 07:37:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.344 07:37:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.344 07:37:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.344 07:37:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.344 07:37:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.344 07:37:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.344 07:37:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.344 07:37:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.344 07:37:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.344 07:37:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.344 07:37:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:42.344 07:37:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:42.344 07:37:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.344 07:37:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.344 07:37:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.344 07:37:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.344 07:37:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.344 07:37:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.344 07:37:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.344 07:37:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.344 07:37:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.345 07:37:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.345 07:37:46 -- paths/export.sh@5 -- # export PATH 00:18:42.345 07:37:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.345 07:37:46 -- nvmf/common.sh@46 -- # : 0 00:18:42.345 07:37:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:42.345 07:37:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:42.345 07:37:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:42.345 07:37:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.345 07:37:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.345 07:37:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:42.345 07:37:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:42.345 07:37:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:42.345 07:37:46 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.345 07:37:46 -- target/tls.sh@71 -- # nvmftestinit 00:18:42.345 07:37:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:42.345 07:37:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.345 07:37:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:42.345 07:37:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:42.345 07:37:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:42.345 07:37:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.345 07:37:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.345 07:37:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.345 07:37:46 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:18:42.345 07:37:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:42.345 07:37:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:42.345 07:37:46 -- common/autotest_common.sh@10 -- # set +x 00:18:47.612 07:37:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:47.612 07:37:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:47.612 07:37:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:47.612 07:37:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:47.612 07:37:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:47.612 07:37:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:47.612 07:37:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:47.612 07:37:51 -- nvmf/common.sh@294 -- # net_devs=() 00:18:47.612 07:37:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:47.612 07:37:51 -- nvmf/common.sh@295 -- # e810=() 00:18:47.612 07:37:51 -- nvmf/common.sh@295 -- # local -ga e810 00:18:47.612 07:37:51 -- nvmf/common.sh@296 -- # x722=() 00:18:47.612 07:37:51 -- nvmf/common.sh@296 -- # local -ga x722 00:18:47.612 07:37:51 -- nvmf/common.sh@297 -- # mlx=() 00:18:47.612 07:37:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:47.612 07:37:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.612 07:37:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.612 07:37:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.612 07:37:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.613 07:37:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.613 07:37:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:47.613 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:47.613 07:37:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:47.613 07:37:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:47.613 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:47.613 07:37:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.613 07:37:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.613 07:37:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.613 07:37:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:47.613 Found net devices under 0000:af:00.0: cvl_0_0 00:18:47.613 07:37:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:47.613 07:37:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.613 07:37:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.613 07:37:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:47.613 Found net devices under 0000:af:00.1: cvl_0_1 00:18:47.613 07:37:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:47.613 07:37:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:47.613 07:37:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.613 07:37:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.613 07:37:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:47.613 07:37:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.613 07:37:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.613 07:37:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:18:47.613 07:37:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.613 07:37:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.613 07:37:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:47.613 07:37:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:47.613 07:37:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.613 07:37:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.613 07:37:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.613 07:37:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.613 07:37:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:47.613 07:37:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.613 07:37:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.613 07:37:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.613 07:37:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:47.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:18:47.613 00:18:47.613 --- 10.0.0.2 ping statistics --- 00:18:47.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.613 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:18:47.613 07:37:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:18:47.613 00:18:47.613 --- 10.0.0.1 ping statistics --- 00:18:47.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.613 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:18:47.613 07:37:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.613 07:37:51 -- nvmf/common.sh@410 -- # return 0 00:18:47.613 07:37:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:47.613 07:37:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.613 07:37:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:47.613 07:37:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.613 07:37:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:47.613 07:37:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:47.613 07:37:51 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:47.613 07:37:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:47.613 07:37:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:47.613 07:37:51 -- common/autotest_common.sh@10 -- # set +x 00:18:47.613 07:37:51 -- nvmf/common.sh@469 -- # nvmfpid=4143688 00:18:47.613 07:37:51 -- nvmf/common.sh@470 -- # waitforlisten 4143688 00:18:47.613 07:37:51 -- common/autotest_common.sh@819 -- # '[' -z 4143688 ']' 00:18:47.613 07:37:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.613 07:37:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:47.613 07:37:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:47.613 07:37:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:47.613 07:37:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:47.613 07:37:51 -- common/autotest_common.sh@10 -- # set +x 00:18:47.613 [2024-10-07 07:37:51.567516] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:47.613 [2024-10-07 07:37:51.567561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.872 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.872 [2024-10-07 07:37:51.628178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.872 [2024-10-07 07:37:51.703685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:47.872 [2024-10-07 07:37:51.703789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.872 [2024-10-07 07:37:51.703797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.872 [2024-10-07 07:37:51.703803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:47.872 [2024-10-07 07:37:51.703819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.438 07:37:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:48.438 07:37:52 -- common/autotest_common.sh@852 -- # return 0 00:18:48.438 07:37:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:48.438 07:37:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:48.438 07:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:48.438 07:37:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.696 07:37:52 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:18:48.696 07:37:52 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.696 true 00:18:48.697 07:37:52 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.697 07:37:52 -- target/tls.sh@82 -- # jq -r .tls_version 00:18:48.955 07:37:52 -- target/tls.sh@82 -- # version=0 00:18:48.955 07:37:52 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:18:48.955 07:37:52 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:49.214 07:37:52 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.214 07:37:52 -- target/tls.sh@90 -- # jq -r .tls_version 00:18:49.214 07:37:53 -- target/tls.sh@90 -- # version=13 00:18:49.214 07:37:53 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:18:49.214 07:37:53 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:49.472 07:37:53 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.472 07:37:53 -- target/tls.sh@98 -- # jq -r .tls_version 
00:18:49.730 07:37:53 -- target/tls.sh@98 -- # version=7 00:18:49.730 07:37:53 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:18:49.730 07:37:53 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.730 07:37:53 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:49.730 07:37:53 -- target/tls.sh@105 -- # ktls=false 00:18:49.730 07:37:53 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:18:49.730 07:37:53 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:49.989 07:37:53 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.989 07:37:53 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:50.247 07:37:53 -- target/tls.sh@113 -- # ktls=true 00:18:50.247 07:37:53 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:18:50.247 07:37:53 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:50.247 07:37:54 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.247 07:37:54 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:18:50.505 07:37:54 -- target/tls.sh@121 -- # ktls=false 00:18:50.505 07:37:54 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:18:50.505 07:37:54 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:18:50.505 07:37:54 -- target/tls.sh@49 -- # local key hash crc 00:18:50.505 07:37:54 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:18:50.505 07:37:54 -- target/tls.sh@51 -- # hash=01 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # gzip -1 -c 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # tail -c8 00:18:50.505 07:37:54 -- 
target/tls.sh@52 -- # head -c 4 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # crc='p$H�' 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.505 07:37:54 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.505 07:37:54 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:18:50.505 07:37:54 -- target/tls.sh@49 -- # local key hash crc 00:18:50.505 07:37:54 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:18:50.505 07:37:54 -- target/tls.sh@51 -- # hash=01 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # gzip -1 -c 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # tail -c8 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # head -c 4 00:18:50.505 07:37:54 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:18:50.505 07:37:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.505 07:37:54 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.506 07:37:54 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:50.506 07:37:54 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:50.506 07:37:54 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.506 07:37:54 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.506 07:37:54 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:50.506 07:37:54 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:50.506 07:37:54 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:50.762 07:37:54 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:51.019 07:37:54 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:51.019 07:37:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:51.019 07:37:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.019 [2024-10-07 07:37:54.963975] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.019 07:37:54 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.276 07:37:55 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.533 [2024-10-07 07:37:55.304852] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.533 [2024-10-07 07:37:55.305085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.533 07:37:55 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.533 malloc0 00:18:51.533 07:37:55 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.791 07:37:55 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:52.048 07:37:55 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:52.048 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.012 Initializing NVMe Controllers 00:19:02.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:02.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:02.012 Initialization complete. Launching workers. 
00:19:02.012 ======================================================== 00:19:02.012 Latency(us) 00:19:02.012 Device Information : IOPS MiB/s Average min max 00:19:02.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17549.55 68.55 3647.18 778.65 5728.31 00:19:02.012 ======================================================== 00:19:02.012 Total : 17549.55 68.55 3647.18 778.65 5728.31 00:19:02.012 00:19:02.012 07:38:05 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:02.012 07:38:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.012 07:38:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.012 07:38:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.012 07:38:05 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:02.012 07:38:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.012 07:38:05 -- target/tls.sh@28 -- # bdevperf_pid=4146205 00:19:02.012 07:38:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.012 07:38:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.012 07:38:05 -- target/tls.sh@31 -- # waitforlisten 4146205 /var/tmp/bdevperf.sock 00:19:02.012 07:38:05 -- common/autotest_common.sh@819 -- # '[' -z 4146205 ']' 00:19:02.012 07:38:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.012 07:38:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:02.012 07:38:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:02.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.012 07:38:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:02.012 07:38:05 -- common/autotest_common.sh@10 -- # set +x 00:19:02.012 [2024-10-07 07:38:05.976311] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:02.012 [2024-10-07 07:38:05.976361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146205 ] 00:19:02.270 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.270 [2024-10-07 07:38:06.028392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.270 [2024-10-07 07:38:06.096005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.842 07:38:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.842 07:38:06 -- common/autotest_common.sh@852 -- # return 0 00:19:02.842 07:38:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:03.113 [2024-10-07 07:38:06.937973] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.113 TLSTESTn1 00:19:03.113 07:38:07 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:03.386 Running I/O for 10 seconds... 
00:19:13.375 00:19:13.375 Latency(us) 00:19:13.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.375 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:13.375 Verification LBA range: start 0x0 length 0x2000 00:19:13.375 TLSTESTn1 : 10.02 4402.61 17.20 0.00 0.00 29039.36 3963.37 49432.87 00:19:13.375 =================================================================================================================== 00:19:13.375 Total : 4402.61 17.20 0.00 0.00 29039.36 3963.37 49432.87 00:19:13.375 0 00:19:13.375 07:38:17 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.375 07:38:17 -- target/tls.sh@45 -- # killprocess 4146205 00:19:13.375 07:38:17 -- common/autotest_common.sh@926 -- # '[' -z 4146205 ']' 00:19:13.375 07:38:17 -- common/autotest_common.sh@930 -- # kill -0 4146205 00:19:13.375 07:38:17 -- common/autotest_common.sh@931 -- # uname 00:19:13.375 07:38:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:13.375 07:38:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4146205 00:19:13.375 07:38:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:13.375 07:38:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:13.375 07:38:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4146205' 00:19:13.375 killing process with pid 4146205 00:19:13.375 07:38:17 -- common/autotest_common.sh@945 -- # kill 4146205 00:19:13.375 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.375 00:19:13.375 Latency(us) 00:19:13.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.375 =================================================================================================================== 00:19:13.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.375 07:38:17 -- common/autotest_common.sh@950 -- # wait 4146205 00:19:13.632 07:38:17 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:13.632 07:38:17 -- common/autotest_common.sh@640 -- # local es=0 00:19:13.632 07:38:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:13.632 07:38:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:13.632 07:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:13.632 07:38:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:13.632 07:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:13.632 07:38:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:13.632 07:38:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.632 07:38:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.632 07:38:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.632 07:38:17 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:19:13.632 07:38:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.632 07:38:17 -- target/tls.sh@28 -- # bdevperf_pid=4148075 00:19:13.632 07:38:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.632 07:38:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.632 07:38:17 -- target/tls.sh@31 -- # waitforlisten 4148075 /var/tmp/bdevperf.sock 00:19:13.632 07:38:17 -- common/autotest_common.sh@819 -- # '[' -z 4148075 ']' 00:19:13.632 07:38:17 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.632 07:38:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.632 07:38:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.632 07:38:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.632 07:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 [2024-10-07 07:38:17.479969] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:13.632 [2024-10-07 07:38:17.480019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148075 ] 00:19:13.632 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.632 [2024-10-07 07:38:17.530077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.632 [2024-10-07 07:38:17.595552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.560 07:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.561 07:38:18 -- common/autotest_common.sh@852 -- # return 0 00:19:14.561 07:38:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:14.561 [2024-10-07 07:38:18.461428] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.561 [2024-10-07 07:38:18.465834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.561 [2024-10-07 07:38:18.466541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c47c0 (107): Transport endpoint is not connected 00:19:14.561 [2024-10-07 07:38:18.467534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c47c0 (9): Bad file descriptor 00:19:14.561 [2024-10-07 07:38:18.468534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.561 [2024-10-07 07:38:18.468543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.561 [2024-10-07 07:38:18.468549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:14.561 request: 00:19:14.561 { 00:19:14.561 "name": "TLSTEST", 00:19:14.561 "trtype": "tcp", 00:19:14.561 "traddr": "10.0.0.2", 00:19:14.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.561 "adrfam": "ipv4", 00:19:14.561 "trsvcid": "4420", 00:19:14.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.561 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:19:14.561 "method": "bdev_nvme_attach_controller", 00:19:14.561 "req_id": 1 00:19:14.561 } 00:19:14.561 Got JSON-RPC error response 00:19:14.561 response: 00:19:14.561 { 00:19:14.561 "code": -32602, 00:19:14.561 "message": "Invalid parameters" 00:19:14.561 } 00:19:14.561 07:38:18 -- target/tls.sh@36 -- # killprocess 4148075 00:19:14.561 07:38:18 -- common/autotest_common.sh@926 -- # '[' -z 4148075 ']' 00:19:14.561 07:38:18 -- common/autotest_common.sh@930 -- # kill -0 4148075 00:19:14.561 07:38:18 -- common/autotest_common.sh@931 -- # uname 00:19:14.561 07:38:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.561 07:38:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4148075 00:19:14.818 07:38:18 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:14.818 07:38:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:14.818 07:38:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4148075' 00:19:14.818 killing process with pid 4148075 00:19:14.818 07:38:18 -- common/autotest_common.sh@945 -- # kill 4148075 00:19:14.818 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.818 00:19:14.818 Latency(us) 00:19:14.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.818 =================================================================================================================== 00:19:14.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.818 07:38:18 -- common/autotest_common.sh@950 -- # wait 4148075 00:19:14.818 07:38:18 -- target/tls.sh@37 -- # return 1 00:19:14.818 07:38:18 -- common/autotest_common.sh@643 -- # es=1 00:19:14.818 07:38:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:14.818 07:38:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:14.818 07:38:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:14.818 07:38:18 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:14.818 07:38:18 -- common/autotest_common.sh@640 -- # local es=0 00:19:14.818 07:38:18 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:14.818 07:38:18 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:14.818 07:38:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:14.818 07:38:18 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:14.818 07:38:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:19:14.818 07:38:18 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:14.818 07:38:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.818 07:38:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.818 07:38:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.818 07:38:18 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:14.818 07:38:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.818 07:38:18 -- target/tls.sh@28 -- # bdevperf_pid=4148306 00:19:14.818 07:38:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.818 07:38:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.818 07:38:18 -- target/tls.sh@31 -- # waitforlisten 4148306 /var/tmp/bdevperf.sock 00:19:14.818 07:38:18 -- common/autotest_common.sh@819 -- # '[' -z 4148306 ']' 00:19:14.818 07:38:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.818 07:38:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.818 07:38:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.818 07:38:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.818 07:38:18 -- common/autotest_common.sh@10 -- # set +x 00:19:14.818 [2024-10-07 07:38:18.781561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:14.818 [2024-10-07 07:38:18.781609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148306 ] 00:19:15.076 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.076 [2024-10-07 07:38:18.831502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.076 [2024-10-07 07:38:18.896218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.640 07:38:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:15.640 07:38:19 -- common/autotest_common.sh@852 -- # return 0 00:19:15.640 07:38:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:15.897 [2024-10-07 07:38:19.758171] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.897 [2024-10-07 07:38:19.767476] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.897 [2024-10-07 07:38:19.767497] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.897 [2024-10-07 07:38:19.767521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.897 [2024-10-07 07:38:19.768282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f17c0 (107): Transport endpoint is not connected 00:19:15.897 [2024-10-07 
07:38:19.769276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f17c0 (9): Bad file descriptor 00:19:15.897 [2024-10-07 07:38:19.770276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.897 [2024-10-07 07:38:19.770286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.897 [2024-10-07 07:38:19.770293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.897 request: 00:19:15.897 { 00:19:15.897 "name": "TLSTEST", 00:19:15.897 "trtype": "tcp", 00:19:15.897 "traddr": "10.0.0.2", 00:19:15.897 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.897 "adrfam": "ipv4", 00:19:15.897 "trsvcid": "4420", 00:19:15.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.897 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:15.897 "method": "bdev_nvme_attach_controller", 00:19:15.897 "req_id": 1 00:19:15.897 } 00:19:15.897 Got JSON-RPC error response 00:19:15.897 response: 00:19:15.897 { 00:19:15.897 "code": -32602, 00:19:15.897 "message": "Invalid parameters" 00:19:15.897 } 00:19:15.897 07:38:19 -- target/tls.sh@36 -- # killprocess 4148306 00:19:15.897 07:38:19 -- common/autotest_common.sh@926 -- # '[' -z 4148306 ']' 00:19:15.897 07:38:19 -- common/autotest_common.sh@930 -- # kill -0 4148306 00:19:15.897 07:38:19 -- common/autotest_common.sh@931 -- # uname 00:19:15.897 07:38:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.897 07:38:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4148306 00:19:15.898 07:38:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:15.898 07:38:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:15.898 07:38:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4148306' 00:19:15.898 killing process with pid 4148306 00:19:15.898 
07:38:19 -- common/autotest_common.sh@945 -- # kill 4148306 00:19:15.898 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.898 00:19:15.898 Latency(us) 00:19:15.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.898 =================================================================================================================== 00:19:15.898 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.898 07:38:19 -- common/autotest_common.sh@950 -- # wait 4148306 00:19:16.155 07:38:20 -- target/tls.sh@37 -- # return 1 00:19:16.155 07:38:20 -- common/autotest_common.sh@643 -- # es=1 00:19:16.155 07:38:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:16.155 07:38:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:16.155 07:38:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:16.155 07:38:20 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:16.155 07:38:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:16.155 07:38:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:16.155 07:38:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:16.155 07:38:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:16.155 07:38:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:16.155 07:38:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:16.155 07:38:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:16.155 07:38:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.155 07:38:20 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:16.155 07:38:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.155 07:38:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:16.155 07:38:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.155 07:38:20 -- target/tls.sh@28 -- # bdevperf_pid=4148544 00:19:16.155 07:38:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.155 07:38:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.155 07:38:20 -- target/tls.sh@31 -- # waitforlisten 4148544 /var/tmp/bdevperf.sock 00:19:16.155 07:38:20 -- common/autotest_common.sh@819 -- # '[' -z 4148544 ']' 00:19:16.155 07:38:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.155 07:38:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.155 07:38:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.155 07:38:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.155 07:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:16.155 [2024-10-07 07:38:20.094416] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:16.155 [2024-10-07 07:38:20.094465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148544 ] 00:19:16.155 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.412 [2024-10-07 07:38:20.145526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.412 [2024-10-07 07:38:20.214852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.975 07:38:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:16.975 07:38:20 -- common/autotest_common.sh@852 -- # return 0 00:19:16.975 07:38:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:17.233 [2024-10-07 07:38:21.057533] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.233 [2024-10-07 07:38:21.061888] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.233 [2024-10-07 07:38:21.061911] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.233 [2024-10-07 07:38:21.061950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.233 [2024-10-07 07:38:21.062674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7227c0 (107): Transport endpoint is not connected 00:19:17.233 [2024-10-07 
07:38:21.063663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7227c0 (9): Bad file descriptor 00:19:17.233 [2024-10-07 07:38:21.064664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:17.233 [2024-10-07 07:38:21.064673] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.233 [2024-10-07 07:38:21.064680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:17.233 request: 00:19:17.233 { 00:19:17.233 "name": "TLSTEST", 00:19:17.233 "trtype": "tcp", 00:19:17.233 "traddr": "10.0.0.2", 00:19:17.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.233 "adrfam": "ipv4", 00:19:17.233 "trsvcid": "4420", 00:19:17.233 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:17.233 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:17.233 "method": "bdev_nvme_attach_controller", 00:19:17.233 "req_id": 1 00:19:17.233 } 00:19:17.233 Got JSON-RPC error response 00:19:17.233 response: 00:19:17.233 { 00:19:17.233 "code": -32602, 00:19:17.233 "message": "Invalid parameters" 00:19:17.233 } 00:19:17.233 07:38:21 -- target/tls.sh@36 -- # killprocess 4148544 00:19:17.233 07:38:21 -- common/autotest_common.sh@926 -- # '[' -z 4148544 ']' 00:19:17.233 07:38:21 -- common/autotest_common.sh@930 -- # kill -0 4148544 00:19:17.233 07:38:21 -- common/autotest_common.sh@931 -- # uname 00:19:17.233 07:38:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.233 07:38:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4148544 00:19:17.233 07:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:17.233 07:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:17.233 07:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4148544' 00:19:17.233 killing process with pid 4148544 00:19:17.233 
07:38:21 -- common/autotest_common.sh@945 -- # kill 4148544 00:19:17.233 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.233 00:19:17.233 Latency(us) 00:19:17.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.233 =================================================================================================================== 00:19:17.233 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.233 07:38:21 -- common/autotest_common.sh@950 -- # wait 4148544 00:19:17.491 07:38:21 -- target/tls.sh@37 -- # return 1 00:19:17.491 07:38:21 -- common/autotest_common.sh@643 -- # es=1 00:19:17.491 07:38:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:17.491 07:38:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:17.491 07:38:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:17.491 07:38:21 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.491 07:38:21 -- common/autotest_common.sh@640 -- # local es=0 00:19:17.491 07:38:21 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.491 07:38:21 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:17.491 07:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.491 07:38:21 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:17.491 07:38:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:17.491 07:38:21 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.491 07:38:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.491 07:38:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.491 07:38:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.491 07:38:21 -- target/tls.sh@23 -- # psk= 00:19:17.491 07:38:21 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.491 07:38:21 -- target/tls.sh@28 -- # bdevperf_pid=4148773 00:19:17.491 07:38:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.491 07:38:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.491 07:38:21 -- target/tls.sh@31 -- # waitforlisten 4148773 /var/tmp/bdevperf.sock 00:19:17.491 07:38:21 -- common/autotest_common.sh@819 -- # '[' -z 4148773 ']' 00:19:17.491 07:38:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.491 07:38:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.491 07:38:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.491 07:38:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.491 07:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:17.491 [2024-10-07 07:38:21.380014] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:17.491 [2024-10-07 07:38:21.380081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148773 ] 00:19:17.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.491 [2024-10-07 07:38:21.430553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.748 [2024-10-07 07:38:21.497147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.312 07:38:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:18.312 07:38:22 -- common/autotest_common.sh@852 -- # return 0 00:19:18.312 07:38:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:18.570 [2024-10-07 07:38:22.354064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.570 [2024-10-07 07:38:22.355933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2324170 (9): Bad file descriptor 00:19:18.570 [2024-10-07 07:38:22.356932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.570 [2024-10-07 07:38:22.356942] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.570 [2024-10-07 07:38:22.356949] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:18.570 request: 00:19:18.570 { 00:19:18.570 "name": "TLSTEST", 00:19:18.570 "trtype": "tcp", 00:19:18.570 "traddr": "10.0.0.2", 00:19:18.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.570 "adrfam": "ipv4", 00:19:18.570 "trsvcid": "4420", 00:19:18.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.570 "method": "bdev_nvme_attach_controller", 00:19:18.570 "req_id": 1 00:19:18.570 } 00:19:18.570 Got JSON-RPC error response 00:19:18.570 response: 00:19:18.570 { 00:19:18.570 "code": -32602, 00:19:18.570 "message": "Invalid parameters" 00:19:18.570 } 00:19:18.570 07:38:22 -- target/tls.sh@36 -- # killprocess 4148773 00:19:18.571 07:38:22 -- common/autotest_common.sh@926 -- # '[' -z 4148773 ']' 00:19:18.571 07:38:22 -- common/autotest_common.sh@930 -- # kill -0 4148773 00:19:18.571 07:38:22 -- common/autotest_common.sh@931 -- # uname 00:19:18.571 07:38:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:18.571 07:38:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4148773 00:19:18.571 07:38:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:18.571 07:38:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:18.571 07:38:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4148773' 00:19:18.571 killing process with pid 4148773 00:19:18.571 07:38:22 -- common/autotest_common.sh@945 -- # kill 4148773 00:19:18.571 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.571 00:19:18.571 Latency(us) 00:19:18.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.571 =================================================================================================================== 00:19:18.571 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.571 07:38:22 -- common/autotest_common.sh@950 -- # wait 4148773 00:19:18.829 07:38:22 -- target/tls.sh@37 -- # return 1 00:19:18.829 07:38:22 -- 
common/autotest_common.sh@643 -- # es=1 00:19:18.829 07:38:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:18.829 07:38:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:18.829 07:38:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:18.829 07:38:22 -- target/tls.sh@167 -- # killprocess 4143688 00:19:18.829 07:38:22 -- common/autotest_common.sh@926 -- # '[' -z 4143688 ']' 00:19:18.829 07:38:22 -- common/autotest_common.sh@930 -- # kill -0 4143688 00:19:18.829 07:38:22 -- common/autotest_common.sh@931 -- # uname 00:19:18.829 07:38:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:18.829 07:38:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4143688 00:19:18.829 07:38:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:18.829 07:38:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:18.829 07:38:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4143688' 00:19:18.829 killing process with pid 4143688 00:19:18.829 07:38:22 -- common/autotest_common.sh@945 -- # kill 4143688 00:19:18.829 07:38:22 -- common/autotest_common.sh@950 -- # wait 4143688 00:19:19.087 07:38:22 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:19:19.087 07:38:22 -- target/tls.sh@49 -- # local key hash crc 00:19:19.087 07:38:22 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:19.087 07:38:22 -- target/tls.sh@51 -- # hash=02 00:19:19.087 07:38:22 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:19:19.087 07:38:22 -- target/tls.sh@52 -- # gzip -1 -c 00:19:19.087 07:38:22 -- target/tls.sh@52 -- # head -c 4 00:19:19.087 07:38:22 -- target/tls.sh@52 -- # tail -c8 00:19:19.087 07:38:22 -- target/tls.sh@52 -- # crc='�e�'\''' 00:19:19.087 07:38:22 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:19.087 07:38:22 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:19:19.087 07:38:22 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.087 07:38:22 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.087 07:38:22 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.087 07:38:22 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.087 07:38:22 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.088 07:38:22 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:19:19.088 07:38:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:19.088 07:38:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:19.088 07:38:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.088 07:38:22 -- nvmf/common.sh@469 -- # nvmfpid=4149035 00:19:19.088 07:38:22 -- nvmf/common.sh@470 -- # waitforlisten 4149035 00:19:19.088 07:38:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.088 07:38:22 -- common/autotest_common.sh@819 -- # '[' -z 4149035 ']' 00:19:19.088 07:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.088 07:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:19.088 07:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:19.088 07:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:19.088 07:38:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.088 [2024-10-07 07:38:22.967033] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:19.088 [2024-10-07 07:38:22.967082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.088 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.088 [2024-10-07 07:38:23.025732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.346 [2024-10-07 07:38:23.098612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:19.346 [2024-10-07 07:38:23.098738] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.346 [2024-10-07 07:38:23.098747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.346 [2024-10-07 07:38:23.098753] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.346 [2024-10-07 07:38:23.098775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.911 07:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:19.911 07:38:23 -- common/autotest_common.sh@852 -- # return 0 00:19:19.911 07:38:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:19.911 07:38:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:19.911 07:38:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.911 07:38:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.911 07:38:23 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.911 07:38:23 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.911 07:38:23 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.171 [2024-10-07 07:38:23.973627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.171 07:38:23 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.430 07:38:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.430 [2024-10-07 07:38:24.318510] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.430 [2024-10-07 07:38:24.318733] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.430 07:38:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.688 malloc0 00:19:20.688 07:38:24 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.945 07:38:24 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.945 07:38:24 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.945 07:38:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.945 07:38:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.945 07:38:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.945 07:38:24 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:20.945 07:38:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.945 07:38:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.945 07:38:24 -- target/tls.sh@28 -- # bdevperf_pid=4149295 00:19:20.945 07:38:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.945 07:38:24 -- target/tls.sh@31 -- # waitforlisten 4149295 /var/tmp/bdevperf.sock 00:19:20.945 07:38:24 -- common/autotest_common.sh@819 -- # '[' -z 4149295 ']' 00:19:20.945 07:38:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.945 07:38:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:20.945 07:38:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:20.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.945 07:38:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:20.945 07:38:24 -- common/autotest_common.sh@10 -- # set +x 00:19:20.945 [2024-10-07 07:38:24.889101] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:20.945 [2024-10-07 07:38:24.889148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149295 ] 00:19:20.945 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.202 [2024-10-07 07:38:24.939284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.202 [2024-10-07 07:38:25.013462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.767 07:38:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:21.767 07:38:25 -- common/autotest_common.sh@852 -- # return 0 00:19:21.767 07:38:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:22.024 [2024-10-07 07:38:25.852506] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.024 TLSTESTn1 00:19:22.024 07:38:25 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:22.281 Running I/O for 10 seconds... 
00:19:32.241 00:19:32.241 Latency(us) 00:19:32.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.241 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.241 Verification LBA range: start 0x0 length 0x2000 00:19:32.241 TLSTESTn1 : 10.02 4657.22 18.19 0.00 0.00 27453.13 5523.75 46936.26 00:19:32.241 =================================================================================================================== 00:19:32.241 Total : 4657.22 18.19 0.00 0.00 27453.13 5523.75 46936.26 00:19:32.241 0 00:19:32.241 07:38:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.241 07:38:36 -- target/tls.sh@45 -- # killprocess 4149295 00:19:32.241 07:38:36 -- common/autotest_common.sh@926 -- # '[' -z 4149295 ']' 00:19:32.241 07:38:36 -- common/autotest_common.sh@930 -- # kill -0 4149295 00:19:32.241 07:38:36 -- common/autotest_common.sh@931 -- # uname 00:19:32.241 07:38:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.241 07:38:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4149295 00:19:32.241 07:38:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:32.241 07:38:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:32.241 07:38:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4149295' 00:19:32.241 killing process with pid 4149295 00:19:32.241 07:38:36 -- common/autotest_common.sh@945 -- # kill 4149295 00:19:32.241 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.241 00:19:32.241 Latency(us) 00:19:32.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.241 =================================================================================================================== 00:19:32.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.241 07:38:36 -- common/autotest_common.sh@950 -- # wait 4149295 00:19:32.498 07:38:36 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:32.498 07:38:36 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:32.498 07:38:36 -- common/autotest_common.sh@640 -- # local es=0 00:19:32.498 07:38:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:32.498 07:38:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:32.498 07:38:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:32.498 07:38:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:32.498 07:38:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:32.498 07:38:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:32.498 07:38:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.498 07:38:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.499 07:38:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.499 07:38:36 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:32.499 07:38:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.499 07:38:36 -- target/tls.sh@28 -- # bdevperf_pid=4151118 00:19:32.499 07:38:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.499 07:38:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.499 07:38:36 -- target/tls.sh@31 
-- # waitforlisten 4151118 /var/tmp/bdevperf.sock 00:19:32.499 07:38:36 -- common/autotest_common.sh@819 -- # '[' -z 4151118 ']' 00:19:32.499 07:38:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.499 07:38:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:32.499 07:38:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.499 07:38:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:32.499 07:38:36 -- common/autotest_common.sh@10 -- # set +x 00:19:32.499 [2024-10-07 07:38:36.398149] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:32.499 [2024-10-07 07:38:36.398197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151118 ] 00:19:32.499 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.499 [2024-10-07 07:38:36.449195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.757 [2024-10-07 07:38:36.514855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.321 07:38:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:33.321 07:38:37 -- common/autotest_common.sh@852 -- # return 0 00:19:33.321 07:38:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:33.579 [2024-10-07 07:38:37.372585] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:19:33.579 [2024-10-07 07:38:37.372619] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:33.579 request: 00:19:33.579 { 00:19:33.579 "name": "TLSTEST", 00:19:33.579 "trtype": "tcp", 00:19:33.579 "traddr": "10.0.0.2", 00:19:33.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.579 "adrfam": "ipv4", 00:19:33.579 "trsvcid": "4420", 00:19:33.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.579 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:33.579 "method": "bdev_nvme_attach_controller", 00:19:33.579 "req_id": 1 00:19:33.579 } 00:19:33.579 Got JSON-RPC error response 00:19:33.579 response: 00:19:33.579 { 00:19:33.579 "code": -22, 00:19:33.579 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:33.579 } 00:19:33.579 07:38:37 -- target/tls.sh@36 -- # killprocess 4151118 00:19:33.579 07:38:37 -- common/autotest_common.sh@926 -- # '[' -z 4151118 ']' 00:19:33.579 07:38:37 -- common/autotest_common.sh@930 -- # kill -0 4151118 00:19:33.579 07:38:37 -- common/autotest_common.sh@931 -- # uname 00:19:33.579 07:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:33.579 07:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4151118 00:19:33.579 07:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:33.579 07:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:33.579 07:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4151118' 00:19:33.579 killing process with pid 4151118 00:19:33.579 07:38:37 -- common/autotest_common.sh@945 -- # kill 4151118 00:19:33.579 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.579 00:19:33.579 Latency(us) 00:19:33.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.579 
=================================================================================================================== 00:19:33.579 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.579 07:38:37 -- common/autotest_common.sh@950 -- # wait 4151118 00:19:33.837 07:38:37 -- target/tls.sh@37 -- # return 1 00:19:33.837 07:38:37 -- common/autotest_common.sh@643 -- # es=1 00:19:33.837 07:38:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:33.837 07:38:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:33.837 07:38:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:33.837 07:38:37 -- target/tls.sh@183 -- # killprocess 4149035 00:19:33.837 07:38:37 -- common/autotest_common.sh@926 -- # '[' -z 4149035 ']' 00:19:33.837 07:38:37 -- common/autotest_common.sh@930 -- # kill -0 4149035 00:19:33.837 07:38:37 -- common/autotest_common.sh@931 -- # uname 00:19:33.837 07:38:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:33.837 07:38:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4149035 00:19:33.837 07:38:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:33.837 07:38:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:33.837 07:38:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4149035' 00:19:33.837 killing process with pid 4149035 00:19:33.837 07:38:37 -- common/autotest_common.sh@945 -- # kill 4149035 00:19:33.837 07:38:37 -- common/autotest_common.sh@950 -- # wait 4149035 00:19:34.095 07:38:37 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:34.095 07:38:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:34.095 07:38:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:34.095 07:38:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.095 07:38:37 -- nvmf/common.sh@469 -- # nvmfpid=4151443 00:19:34.095 07:38:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.095 07:38:37 -- nvmf/common.sh@470 -- # waitforlisten 4151443 00:19:34.095 07:38:37 -- common/autotest_common.sh@819 -- # '[' -z 4151443 ']' 00:19:34.095 07:38:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.095 07:38:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.095 07:38:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.095 07:38:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.095 07:38:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.095 [2024-10-07 07:38:37.959910] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:34.095 [2024-10-07 07:38:37.959954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.095 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.095 [2024-10-07 07:38:38.020669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.353 [2024-10-07 07:38:38.089243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.353 [2024-10-07 07:38:38.089350] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.353 [2024-10-07 07:38:38.089358] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.353 [2024-10-07 07:38:38.089363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.353 [2024-10-07 07:38:38.089380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.918 07:38:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:34.918 07:38:38 -- common/autotest_common.sh@852 -- # return 0 00:19:34.918 07:38:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:34.918 07:38:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:34.918 07:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:34.918 07:38:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.918 07:38:38 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:34.918 07:38:38 -- common/autotest_common.sh@640 -- # local es=0 00:19:34.918 07:38:38 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:34.918 07:38:38 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:19:34.918 07:38:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:34.918 07:38:38 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:19:34.918 07:38:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:34.918 07:38:38 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:34.918 07:38:38 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:34.918 07:38:38 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.175 [2024-10-07 07:38:38.971534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.175 07:38:38 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.433 07:38:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.433 [2024-10-07 07:38:39.328473] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.433 [2024-10-07 07:38:39.328683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.433 07:38:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.691 malloc0 00:19:35.691 07:38:39 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.949 07:38:39 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:35.949 [2024-10-07 07:38:39.837761] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:35.949 [2024-10-07 07:38:39.837785] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:35.949 [2024-10-07 07:38:39.837800] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:35.949 request: 00:19:35.949 { 00:19:35.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.949 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.949 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:35.949 "method": "nvmf_subsystem_add_host", 00:19:35.949 "req_id": 1 00:19:35.949 } 00:19:35.949 Got JSON-RPC error response 00:19:35.949 response: 00:19:35.949 { 00:19:35.949 "code": -32603, 00:19:35.949 "message": "Internal error" 
00:19:35.949 } 00:19:35.949 07:38:39 -- common/autotest_common.sh@643 -- # es=1 00:19:35.949 07:38:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:35.949 07:38:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:35.950 07:38:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:35.950 07:38:39 -- target/tls.sh@189 -- # killprocess 4151443 00:19:35.950 07:38:39 -- common/autotest_common.sh@926 -- # '[' -z 4151443 ']' 00:19:35.950 07:38:39 -- common/autotest_common.sh@930 -- # kill -0 4151443 00:19:35.950 07:38:39 -- common/autotest_common.sh@931 -- # uname 00:19:35.950 07:38:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:35.950 07:38:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4151443 00:19:35.950 07:38:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:35.950 07:38:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:35.950 07:38:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4151443' 00:19:35.950 killing process with pid 4151443 00:19:35.950 07:38:39 -- common/autotest_common.sh@945 -- # kill 4151443 00:19:35.950 07:38:39 -- common/autotest_common.sh@950 -- # wait 4151443 00:19:36.209 07:38:40 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:36.209 07:38:40 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:19:36.209 07:38:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:36.209 07:38:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:36.209 07:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.209 07:38:40 -- nvmf/common.sh@469 -- # nvmfpid=4151857 00:19:36.209 07:38:40 -- nvmf/common.sh@470 -- # waitforlisten 4151857 00:19:36.209 07:38:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.209 07:38:40 -- 
common/autotest_common.sh@819 -- # '[' -z 4151857 ']' 00:19:36.209 07:38:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.209 07:38:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.209 07:38:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.209 07:38:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.209 07:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.467 [2024-10-07 07:38:40.186668] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:36.467 [2024-10-07 07:38:40.186714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.467 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.467 [2024-10-07 07:38:40.245677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.467 [2024-10-07 07:38:40.310448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:36.467 [2024-10-07 07:38:40.310561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.467 [2024-10-07 07:38:40.310569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.467 [2024-10-07 07:38:40.310575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.467 [2024-10-07 07:38:40.310599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.031 07:38:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.031 07:38:40 -- common/autotest_common.sh@852 -- # return 0 00:19:37.031 07:38:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.031 07:38:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.031 07:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.288 07:38:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.288 07:38:41 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:37.288 07:38:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:37.288 07:38:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.288 [2024-10-07 07:38:41.189503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.288 07:38:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.545 07:38:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.801 [2024-10-07 07:38:41.518342] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.801 [2024-10-07 07:38:41.518548] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.801 07:38:41 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.801 malloc0 00:19:37.801 07:38:41 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.057 07:38:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:38.314 07:38:42 -- target/tls.sh@197 -- # bdevperf_pid=4152210 00:19:38.314 07:38:42 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.314 07:38:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.314 07:38:42 -- target/tls.sh@200 -- # waitforlisten 4152210 /var/tmp/bdevperf.sock 00:19:38.314 07:38:42 -- common/autotest_common.sh@819 -- # '[' -z 4152210 ']' 00:19:38.314 07:38:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.314 07:38:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.314 07:38:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.314 07:38:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.314 07:38:42 -- common/autotest_common.sh@10 -- # set +x 00:19:38.314 [2024-10-07 07:38:42.085785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:38.314 [2024-10-07 07:38:42.085836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152210 ] 00:19:38.314 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.315 [2024-10-07 07:38:42.139574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.315 [2024-10-07 07:38:42.209258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.259 07:38:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:39.259 07:38:42 -- common/autotest_common.sh@852 -- # return 0 00:19:39.259 07:38:42 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:39.259 [2024-10-07 07:38:43.038891] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.259 TLSTESTn1 00:19:39.259 07:38:43 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:39.515 07:38:43 -- target/tls.sh@205 -- # tgtconf='{ 00:19:39.515 "subsystems": [ 00:19:39.515 { 00:19:39.515 "subsystem": "iobuf", 00:19:39.515 "config": [ 00:19:39.515 { 00:19:39.515 "method": "iobuf_set_options", 00:19:39.515 "params": { 00:19:39.515 "small_pool_count": 8192, 00:19:39.515 "large_pool_count": 1024, 00:19:39.515 "small_bufsize": 8192, 00:19:39.515 "large_bufsize": 135168 00:19:39.515 } 00:19:39.515 } 00:19:39.515 ] 00:19:39.515 }, 00:19:39.515 { 00:19:39.515 "subsystem": "sock", 00:19:39.515 "config": [ 00:19:39.515 { 00:19:39.515 "method": "sock_impl_set_options", 00:19:39.515 "params": { 00:19:39.515 "impl_name": "posix", 
00:19:39.515 "recv_buf_size": 2097152, 00:19:39.515 "send_buf_size": 2097152, 00:19:39.516 "enable_recv_pipe": true, 00:19:39.516 "enable_quickack": false, 00:19:39.516 "enable_placement_id": 0, 00:19:39.516 "enable_zerocopy_send_server": true, 00:19:39.516 "enable_zerocopy_send_client": false, 00:19:39.516 "zerocopy_threshold": 0, 00:19:39.516 "tls_version": 0, 00:19:39.516 "enable_ktls": false 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "sock_impl_set_options", 00:19:39.516 "params": { 00:19:39.516 "impl_name": "ssl", 00:19:39.516 "recv_buf_size": 4096, 00:19:39.516 "send_buf_size": 4096, 00:19:39.516 "enable_recv_pipe": true, 00:19:39.516 "enable_quickack": false, 00:19:39.516 "enable_placement_id": 0, 00:19:39.516 "enable_zerocopy_send_server": true, 00:19:39.516 "enable_zerocopy_send_client": false, 00:19:39.516 "zerocopy_threshold": 0, 00:19:39.516 "tls_version": 0, 00:19:39.516 "enable_ktls": false 00:19:39.516 } 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "vmd", 00:19:39.516 "config": [] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "accel", 00:19:39.516 "config": [ 00:19:39.516 { 00:19:39.516 "method": "accel_set_options", 00:19:39.516 "params": { 00:19:39.516 "small_cache_size": 128, 00:19:39.516 "large_cache_size": 16, 00:19:39.516 "task_count": 2048, 00:19:39.516 "sequence_count": 2048, 00:19:39.516 "buf_count": 2048 00:19:39.516 } 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "bdev", 00:19:39.516 "config": [ 00:19:39.516 { 00:19:39.516 "method": "bdev_set_options", 00:19:39.516 "params": { 00:19:39.516 "bdev_io_pool_size": 65535, 00:19:39.516 "bdev_io_cache_size": 256, 00:19:39.516 "bdev_auto_examine": true, 00:19:39.516 "iobuf_small_cache_size": 128, 00:19:39.516 "iobuf_large_cache_size": 16 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_raid_set_options", 00:19:39.516 "params": { 00:19:39.516 
"process_window_size_kb": 1024 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_iscsi_set_options", 00:19:39.516 "params": { 00:19:39.516 "timeout_sec": 30 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_nvme_set_options", 00:19:39.516 "params": { 00:19:39.516 "action_on_timeout": "none", 00:19:39.516 "timeout_us": 0, 00:19:39.516 "timeout_admin_us": 0, 00:19:39.516 "keep_alive_timeout_ms": 10000, 00:19:39.516 "transport_retry_count": 4, 00:19:39.516 "arbitration_burst": 0, 00:19:39.516 "low_priority_weight": 0, 00:19:39.516 "medium_priority_weight": 0, 00:19:39.516 "high_priority_weight": 0, 00:19:39.516 "nvme_adminq_poll_period_us": 10000, 00:19:39.516 "nvme_ioq_poll_period_us": 0, 00:19:39.516 "io_queue_requests": 0, 00:19:39.516 "delay_cmd_submit": true, 00:19:39.516 "bdev_retry_count": 3, 00:19:39.516 "transport_ack_timeout": 0, 00:19:39.516 "ctrlr_loss_timeout_sec": 0, 00:19:39.516 "reconnect_delay_sec": 0, 00:19:39.516 "fast_io_fail_timeout_sec": 0, 00:19:39.516 "generate_uuids": false, 00:19:39.516 "transport_tos": 0, 00:19:39.516 "io_path_stat": false, 00:19:39.516 "allow_accel_sequence": false 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_nvme_set_hotplug", 00:19:39.516 "params": { 00:19:39.516 "period_us": 100000, 00:19:39.516 "enable": false 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_malloc_create", 00:19:39.516 "params": { 00:19:39.516 "name": "malloc0", 00:19:39.516 "num_blocks": 8192, 00:19:39.516 "block_size": 4096, 00:19:39.516 "physical_block_size": 4096, 00:19:39.516 "uuid": "580f3e5e-458c-4f0b-b513-0495f4a0e69a", 00:19:39.516 "optimal_io_boundary": 0 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "bdev_wait_for_examine" 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "nbd", 00:19:39.516 "config": [] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "scheduler", 
00:19:39.516 "config": [ 00:19:39.516 { 00:19:39.516 "method": "framework_set_scheduler", 00:19:39.516 "params": { 00:19:39.516 "name": "static" 00:19:39.516 } 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "subsystem": "nvmf", 00:19:39.516 "config": [ 00:19:39.516 { 00:19:39.516 "method": "nvmf_set_config", 00:19:39.516 "params": { 00:19:39.516 "discovery_filter": "match_any", 00:19:39.516 "admin_cmd_passthru": { 00:19:39.516 "identify_ctrlr": false 00:19:39.516 } 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_set_max_subsystems", 00:19:39.516 "params": { 00:19:39.516 "max_subsystems": 1024 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_set_crdt", 00:19:39.516 "params": { 00:19:39.516 "crdt1": 0, 00:19:39.516 "crdt2": 0, 00:19:39.516 "crdt3": 0 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_create_transport", 00:19:39.516 "params": { 00:19:39.516 "trtype": "TCP", 00:19:39.516 "max_queue_depth": 128, 00:19:39.516 "max_io_qpairs_per_ctrlr": 127, 00:19:39.516 "in_capsule_data_size": 4096, 00:19:39.516 "max_io_size": 131072, 00:19:39.516 "io_unit_size": 131072, 00:19:39.516 "max_aq_depth": 128, 00:19:39.516 "num_shared_buffers": 511, 00:19:39.516 "buf_cache_size": 4294967295, 00:19:39.516 "dif_insert_or_strip": false, 00:19:39.516 "zcopy": false, 00:19:39.516 "c2h_success": false, 00:19:39.516 "sock_priority": 0, 00:19:39.516 "abort_timeout_sec": 1 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_create_subsystem", 00:19:39.516 "params": { 00:19:39.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.516 "allow_any_host": false, 00:19:39.516 "serial_number": "SPDK00000000000001", 00:19:39.516 "model_number": "SPDK bdev Controller", 00:19:39.516 "max_namespaces": 10, 00:19:39.516 "min_cntlid": 1, 00:19:39.516 "max_cntlid": 65519, 00:19:39.516 "ana_reporting": false 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": 
"nvmf_subsystem_add_host", 00:19:39.516 "params": { 00:19:39.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.516 "host": "nqn.2016-06.io.spdk:host1", 00:19:39.516 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_subsystem_add_ns", 00:19:39.516 "params": { 00:19:39.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.516 "namespace": { 00:19:39.516 "nsid": 1, 00:19:39.516 "bdev_name": "malloc0", 00:19:39.516 "nguid": "580F3E5E458C4F0BB5130495F4A0E69A", 00:19:39.516 "uuid": "580f3e5e-458c-4f0b-b513-0495f4a0e69a" 00:19:39.516 } 00:19:39.516 } 00:19:39.516 }, 00:19:39.516 { 00:19:39.516 "method": "nvmf_subsystem_add_listener", 00:19:39.516 "params": { 00:19:39.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.516 "listen_address": { 00:19:39.516 "trtype": "TCP", 00:19:39.516 "adrfam": "IPv4", 00:19:39.516 "traddr": "10.0.0.2", 00:19:39.516 "trsvcid": "4420" 00:19:39.516 }, 00:19:39.516 "secure_channel": true 00:19:39.516 } 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 } 00:19:39.516 ] 00:19:39.516 }' 00:19:39.516 07:38:43 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:39.774 07:38:43 -- target/tls.sh@206 -- # bdevperfconf='{ 00:19:39.774 "subsystems": [ 00:19:39.774 { 00:19:39.774 "subsystem": "iobuf", 00:19:39.774 "config": [ 00:19:39.774 { 00:19:39.774 "method": "iobuf_set_options", 00:19:39.774 "params": { 00:19:39.774 "small_pool_count": 8192, 00:19:39.774 "large_pool_count": 1024, 00:19:39.774 "small_bufsize": 8192, 00:19:39.774 "large_bufsize": 135168 00:19:39.774 } 00:19:39.774 } 00:19:39.774 ] 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "subsystem": "sock", 00:19:39.774 "config": [ 00:19:39.774 { 00:19:39.774 "method": "sock_impl_set_options", 00:19:39.774 "params": { 00:19:39.774 "impl_name": "posix", 00:19:39.774 "recv_buf_size": 2097152, 00:19:39.774 
"send_buf_size": 2097152, 00:19:39.774 "enable_recv_pipe": true, 00:19:39.774 "enable_quickack": false, 00:19:39.774 "enable_placement_id": 0, 00:19:39.774 "enable_zerocopy_send_server": true, 00:19:39.774 "enable_zerocopy_send_client": false, 00:19:39.774 "zerocopy_threshold": 0, 00:19:39.774 "tls_version": 0, 00:19:39.774 "enable_ktls": false 00:19:39.774 } 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "method": "sock_impl_set_options", 00:19:39.774 "params": { 00:19:39.774 "impl_name": "ssl", 00:19:39.774 "recv_buf_size": 4096, 00:19:39.774 "send_buf_size": 4096, 00:19:39.774 "enable_recv_pipe": true, 00:19:39.774 "enable_quickack": false, 00:19:39.774 "enable_placement_id": 0, 00:19:39.774 "enable_zerocopy_send_server": true, 00:19:39.774 "enable_zerocopy_send_client": false, 00:19:39.774 "zerocopy_threshold": 0, 00:19:39.774 "tls_version": 0, 00:19:39.774 "enable_ktls": false 00:19:39.774 } 00:19:39.774 } 00:19:39.774 ] 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "subsystem": "vmd", 00:19:39.774 "config": [] 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "subsystem": "accel", 00:19:39.774 "config": [ 00:19:39.774 { 00:19:39.774 "method": "accel_set_options", 00:19:39.774 "params": { 00:19:39.774 "small_cache_size": 128, 00:19:39.774 "large_cache_size": 16, 00:19:39.774 "task_count": 2048, 00:19:39.774 "sequence_count": 2048, 00:19:39.774 "buf_count": 2048 00:19:39.774 } 00:19:39.774 } 00:19:39.774 ] 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "subsystem": "bdev", 00:19:39.774 "config": [ 00:19:39.774 { 00:19:39.774 "method": "bdev_set_options", 00:19:39.774 "params": { 00:19:39.774 "bdev_io_pool_size": 65535, 00:19:39.774 "bdev_io_cache_size": 256, 00:19:39.774 "bdev_auto_examine": true, 00:19:39.774 "iobuf_small_cache_size": 128, 00:19:39.774 "iobuf_large_cache_size": 16 00:19:39.774 } 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "method": "bdev_raid_set_options", 00:19:39.774 "params": { 00:19:39.774 "process_window_size_kb": 1024 00:19:39.774 } 00:19:39.774 }, 
00:19:39.774 { 00:19:39.774 "method": "bdev_iscsi_set_options", 00:19:39.774 "params": { 00:19:39.774 "timeout_sec": 30 00:19:39.774 } 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "method": "bdev_nvme_set_options", 00:19:39.774 "params": { 00:19:39.774 "action_on_timeout": "none", 00:19:39.774 "timeout_us": 0, 00:19:39.774 "timeout_admin_us": 0, 00:19:39.774 "keep_alive_timeout_ms": 10000, 00:19:39.774 "transport_retry_count": 4, 00:19:39.774 "arbitration_burst": 0, 00:19:39.774 "low_priority_weight": 0, 00:19:39.774 "medium_priority_weight": 0, 00:19:39.774 "high_priority_weight": 0, 00:19:39.774 "nvme_adminq_poll_period_us": 10000, 00:19:39.774 "nvme_ioq_poll_period_us": 0, 00:19:39.774 "io_queue_requests": 512, 00:19:39.774 "delay_cmd_submit": true, 00:19:39.774 "bdev_retry_count": 3, 00:19:39.774 "transport_ack_timeout": 0, 00:19:39.774 "ctrlr_loss_timeout_sec": 0, 00:19:39.774 "reconnect_delay_sec": 0, 00:19:39.774 "fast_io_fail_timeout_sec": 0, 00:19:39.774 "generate_uuids": false, 00:19:39.774 "transport_tos": 0, 00:19:39.774 "io_path_stat": false, 00:19:39.774 "allow_accel_sequence": false 00:19:39.774 } 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "method": "bdev_nvme_attach_controller", 00:19:39.774 "params": { 00:19:39.774 "name": "TLSTEST", 00:19:39.774 "trtype": "TCP", 00:19:39.774 "adrfam": "IPv4", 00:19:39.774 "traddr": "10.0.0.2", 00:19:39.774 "trsvcid": "4420", 00:19:39.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.774 "prchk_reftag": false, 00:19:39.774 "prchk_guard": false, 00:19:39.774 "ctrlr_loss_timeout_sec": 0, 00:19:39.774 "reconnect_delay_sec": 0, 00:19:39.774 "fast_io_fail_timeout_sec": 0, 00:19:39.774 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:39.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.774 "hdgst": false, 00:19:39.774 "ddgst": false 00:19:39.774 } 00:19:39.774 }, 00:19:39.774 { 00:19:39.774 "method": "bdev_nvme_set_hotplug", 00:19:39.774 "params": { 00:19:39.774 
"period_us": 100000, 00:19:39.774 "enable": false 00:19:39.775 } 00:19:39.775 }, 00:19:39.775 { 00:19:39.775 "method": "bdev_wait_for_examine" 00:19:39.775 } 00:19:39.775 ] 00:19:39.775 }, 00:19:39.775 { 00:19:39.775 "subsystem": "nbd", 00:19:39.775 "config": [] 00:19:39.775 } 00:19:39.775 ] 00:19:39.775 }' 00:19:39.775 07:38:43 -- target/tls.sh@208 -- # killprocess 4152210 00:19:39.775 07:38:43 -- common/autotest_common.sh@926 -- # '[' -z 4152210 ']' 00:19:39.775 07:38:43 -- common/autotest_common.sh@930 -- # kill -0 4152210 00:19:39.775 07:38:43 -- common/autotest_common.sh@931 -- # uname 00:19:39.775 07:38:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.775 07:38:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4152210 00:19:39.775 07:38:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:39.775 07:38:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:39.775 07:38:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4152210' 00:19:39.775 killing process with pid 4152210 00:19:39.775 07:38:43 -- common/autotest_common.sh@945 -- # kill 4152210 00:19:39.775 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.775 00:19:39.775 Latency(us) 00:19:39.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.775 =================================================================================================================== 00:19:39.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.775 07:38:43 -- common/autotest_common.sh@950 -- # wait 4152210 00:19:40.032 07:38:43 -- target/tls.sh@209 -- # killprocess 4151857 00:19:40.032 07:38:43 -- common/autotest_common.sh@926 -- # '[' -z 4151857 ']' 00:19:40.032 07:38:43 -- common/autotest_common.sh@930 -- # kill -0 4151857 00:19:40.032 07:38:43 -- common/autotest_common.sh@931 -- # uname 00:19:40.032 07:38:43 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:19:40.032 07:38:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4151857 00:19:40.032 07:38:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:40.032 07:38:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:40.032 07:38:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4151857' 00:19:40.032 killing process with pid 4151857 00:19:40.032 07:38:43 -- common/autotest_common.sh@945 -- # kill 4151857 00:19:40.032 07:38:43 -- common/autotest_common.sh@950 -- # wait 4151857 00:19:40.289 07:38:44 -- target/tls.sh@212 -- # echo '{ 00:19:40.289 "subsystems": [ 00:19:40.289 { 00:19:40.289 "subsystem": "iobuf", 00:19:40.289 "config": [ 00:19:40.289 { 00:19:40.289 "method": "iobuf_set_options", 00:19:40.289 "params": { 00:19:40.289 "small_pool_count": 8192, 00:19:40.289 "large_pool_count": 1024, 00:19:40.289 "small_bufsize": 8192, 00:19:40.289 "large_bufsize": 135168 00:19:40.289 } 00:19:40.289 } 00:19:40.289 ] 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "subsystem": "sock", 00:19:40.289 "config": [ 00:19:40.289 { 00:19:40.289 "method": "sock_impl_set_options", 00:19:40.289 "params": { 00:19:40.289 "impl_name": "posix", 00:19:40.289 "recv_buf_size": 2097152, 00:19:40.289 "send_buf_size": 2097152, 00:19:40.289 "enable_recv_pipe": true, 00:19:40.289 "enable_quickack": false, 00:19:40.289 "enable_placement_id": 0, 00:19:40.289 "enable_zerocopy_send_server": true, 00:19:40.289 "enable_zerocopy_send_client": false, 00:19:40.289 "zerocopy_threshold": 0, 00:19:40.289 "tls_version": 0, 00:19:40.289 "enable_ktls": false 00:19:40.289 } 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "method": "sock_impl_set_options", 00:19:40.289 "params": { 00:19:40.289 "impl_name": "ssl", 00:19:40.289 "recv_buf_size": 4096, 00:19:40.289 "send_buf_size": 4096, 00:19:40.289 "enable_recv_pipe": true, 00:19:40.289 "enable_quickack": false, 00:19:40.289 "enable_placement_id": 0, 00:19:40.289 "enable_zerocopy_send_server": 
true, 00:19:40.289 "enable_zerocopy_send_client": false, 00:19:40.289 "zerocopy_threshold": 0, 00:19:40.289 "tls_version": 0, 00:19:40.289 "enable_ktls": false 00:19:40.289 } 00:19:40.289 } 00:19:40.289 ] 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "subsystem": "vmd", 00:19:40.289 "config": [] 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "subsystem": "accel", 00:19:40.289 "config": [ 00:19:40.289 { 00:19:40.289 "method": "accel_set_options", 00:19:40.289 "params": { 00:19:40.289 "small_cache_size": 128, 00:19:40.289 "large_cache_size": 16, 00:19:40.289 "task_count": 2048, 00:19:40.289 "sequence_count": 2048, 00:19:40.289 "buf_count": 2048 00:19:40.289 } 00:19:40.289 } 00:19:40.289 ] 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "subsystem": "bdev", 00:19:40.289 "config": [ 00:19:40.289 { 00:19:40.289 "method": "bdev_set_options", 00:19:40.289 "params": { 00:19:40.289 "bdev_io_pool_size": 65535, 00:19:40.289 "bdev_io_cache_size": 256, 00:19:40.289 "bdev_auto_examine": true, 00:19:40.289 "iobuf_small_cache_size": 128, 00:19:40.289 "iobuf_large_cache_size": 16 00:19:40.289 } 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "method": "bdev_raid_set_options", 00:19:40.289 "params": { 00:19:40.289 "process_window_size_kb": 1024 00:19:40.289 } 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "method": "bdev_iscsi_set_options", 00:19:40.289 "params": { 00:19:40.289 "timeout_sec": 30 00:19:40.289 } 00:19:40.289 }, 00:19:40.289 { 00:19:40.289 "method": "bdev_nvme_set_options", 00:19:40.290 "params": { 00:19:40.290 "action_on_timeout": "none", 00:19:40.290 "timeout_us": 0, 00:19:40.290 "timeout_admin_us": 0, 00:19:40.290 "keep_alive_timeout_ms": 10000, 00:19:40.290 "transport_retry_count": 4, 00:19:40.290 "arbitration_burst": 0, 00:19:40.290 "low_priority_weight": 0, 00:19:40.290 "medium_priority_weight": 0, 00:19:40.290 "high_priority_weight": 0, 00:19:40.290 "nvme_adminq_poll_period_us": 10000, 00:19:40.290 "nvme_ioq_poll_period_us": 0, 00:19:40.290 "io_queue_requests": 0, 00:19:40.290 
"delay_cmd_submit": true, 00:19:40.290 "bdev_retry_count": 3, 00:19:40.290 "transport_ack_timeout": 0, 00:19:40.290 "ctrlr_loss_timeout_sec": 0, 00:19:40.290 "reconnect_delay_sec": 0, 00:19:40.290 "fast_io_fail_timeout_sec": 0, 00:19:40.290 "generate_uuids": false, 00:19:40.290 "transport_tos": 0, 00:19:40.290 "io_path_stat": false, 00:19:40.290 "allow_accel_sequence": false 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "bdev_nvme_set_hotplug", 00:19:40.290 "params": { 00:19:40.290 "period_us": 100000, 00:19:40.290 "enable": false 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "bdev_malloc_create", 00:19:40.290 "params": { 00:19:40.290 "name": "malloc0", 00:19:40.290 "num_blocks": 8192, 00:19:40.290 "block_size": 4096, 00:19:40.290 "physical_block_size": 4096, 00:19:40.290 "uuid": "580f3e5e-458c-4f0b-b513-0495f4a0e69a", 00:19:40.290 "optimal_io_boundary": 0 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "bdev_wait_for_examine" 00:19:40.290 } 00:19:40.290 ] 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "subsystem": "nbd", 00:19:40.290 "config": [] 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "subsystem": "scheduler", 00:19:40.290 "config": [ 00:19:40.290 { 00:19:40.290 "method": "framework_set_scheduler", 00:19:40.290 "params": { 00:19:40.290 "name": "static" 00:19:40.290 } 00:19:40.290 } 00:19:40.290 ] 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "subsystem": "nvmf", 00:19:40.290 "config": [ 00:19:40.290 { 00:19:40.290 "method": "nvmf_set_config", 00:19:40.290 "params": { 00:19:40.290 "discovery_filter": "match_any", 00:19:40.290 "admin_cmd_passthru": { 00:19:40.290 "identify_ctrlr": false 00:19:40.290 } 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_set_max_subsystems", 00:19:40.290 "params": { 00:19:40.290 "max_subsystems": 1024 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_set_crdt", 00:19:40.290 "params": { 00:19:40.290 "crdt1": 0, 
00:19:40.290 "crdt2": 0, 00:19:40.290 "crdt3": 0 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_create_transport", 00:19:40.290 "params": { 00:19:40.290 "trtype": "TCP", 00:19:40.290 "max_queue_depth": 128, 00:19:40.290 "max_io_qpairs_per_ctrlr": 127, 00:19:40.290 "in_capsule_data_size": 4096, 00:19:40.290 "max_io_size": 131072, 00:19:40.290 "io_unit_size": 131072, 00:19:40.290 "max_aq_depth": 128, 00:19:40.290 "num_shared_buffers": 511, 00:19:40.290 "buf_cache_size": 4294967295, 00:19:40.290 "dif_insert_or_strip": false, 00:19:40.290 "zcopy": false, 00:19:40.290 "c2h_success": false, 00:19:40.290 "sock_priority": 0, 00:19:40.290 "abort_timeout_sec": 1 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_create_subsystem", 00:19:40.290 "params": { 00:19:40.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.290 "allow_any_host": false, 00:19:40.290 "serial_number": "SPDK00000000000001", 00:19:40.290 "model_number": "SPDK bdev Controller", 00:19:40.290 "max_namespaces": 10, 00:19:40.290 "min_cntlid": 1, 00:19:40.290 "max_cntlid": 65519, 00:19:40.290 "ana_reporting": false 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_subsystem_add_host", 00:19:40.290 "params": { 00:19:40.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.290 "host": "nqn.2016-06.io.spdk:host1", 00:19:40.290 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_subsystem_add_ns", 00:19:40.290 "params": { 00:19:40.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.290 "namespace": { 00:19:40.290 "nsid": 1, 00:19:40.290 "bdev_name": "malloc0", 00:19:40.290 "nguid": "580F3E5E458C4F0BB5130495F4A0E69A", 00:19:40.290 "uuid": "580f3e5e-458c-4f0b-b513-0495f4a0e69a" 00:19:40.290 } 00:19:40.290 } 00:19:40.290 }, 00:19:40.290 { 00:19:40.290 "method": "nvmf_subsystem_add_listener", 00:19:40.290 "params": { 00:19:40.290 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:40.290 "listen_address": { 00:19:40.290 "trtype": "TCP", 00:19:40.290 "adrfam": "IPv4", 00:19:40.290 "traddr": "10.0.0.2", 00:19:40.290 "trsvcid": "4420" 00:19:40.290 }, 00:19:40.290 "secure_channel": true 00:19:40.290 } 00:19:40.290 } 00:19:40.290 ] 00:19:40.290 } 00:19:40.290 ] 00:19:40.290 }' 00:19:40.290 07:38:44 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:40.290 07:38:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:40.290 07:38:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:40.290 07:38:44 -- common/autotest_common.sh@10 -- # set +x 00:19:40.290 07:38:44 -- nvmf/common.sh@469 -- # nvmfpid=4152582 00:19:40.290 07:38:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:40.290 07:38:44 -- nvmf/common.sh@470 -- # waitforlisten 4152582 00:19:40.290 07:38:44 -- common/autotest_common.sh@819 -- # '[' -z 4152582 ']' 00:19:40.290 07:38:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.290 07:38:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:40.291 07:38:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.291 07:38:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:40.291 07:38:44 -- common/autotest_common.sh@10 -- # set +x 00:19:40.291 [2024-10-07 07:38:44.199047] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:19:40.291 [2024-10-07 07:38:44.199096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.291 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.291 [2024-10-07 07:38:44.255229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.548 [2024-10-07 07:38:44.319635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:40.548 [2024-10-07 07:38:44.319743] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.548 [2024-10-07 07:38:44.319750] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.548 [2024-10-07 07:38:44.319759] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.548 [2024-10-07 07:38:44.319775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.548 [2024-10-07 07:38:44.513331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.804 [2024-10-07 07:38:44.545372] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:40.804 [2024-10-07 07:38:44.545573] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.061 07:38:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:41.061 07:38:45 -- common/autotest_common.sh@852 -- # return 0 00:19:41.061 07:38:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:41.061 07:38:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:41.061 07:38:45 -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 07:38:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.320 07:38:45 -- target/tls.sh@216 -- # bdevperf_pid=4152825 
00:19:41.320 07:38:45 -- target/tls.sh@217 -- # waitforlisten 4152825 /var/tmp/bdevperf.sock 00:19:41.320 07:38:45 -- common/autotest_common.sh@819 -- # '[' -z 4152825 ']' 00:19:41.320 07:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.320 07:38:45 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:41.320 07:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:41.320 07:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.320 07:38:45 -- target/tls.sh@213 -- # echo '{ 00:19:41.320 "subsystems": [ 00:19:41.320 { 00:19:41.320 "subsystem": "iobuf", 00:19:41.320 "config": [ 00:19:41.320 { 00:19:41.320 "method": "iobuf_set_options", 00:19:41.320 "params": { 00:19:41.320 "small_pool_count": 8192, 00:19:41.320 "large_pool_count": 1024, 00:19:41.320 "small_bufsize": 8192, 00:19:41.320 "large_bufsize": 135168 00:19:41.320 } 00:19:41.320 } 00:19:41.320 ] 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "subsystem": "sock", 00:19:41.320 "config": [ 00:19:41.320 { 00:19:41.320 "method": "sock_impl_set_options", 00:19:41.320 "params": { 00:19:41.320 "impl_name": "posix", 00:19:41.320 "recv_buf_size": 2097152, 00:19:41.320 "send_buf_size": 2097152, 00:19:41.320 "enable_recv_pipe": true, 00:19:41.320 "enable_quickack": false, 00:19:41.320 "enable_placement_id": 0, 00:19:41.320 "enable_zerocopy_send_server": true, 00:19:41.320 "enable_zerocopy_send_client": false, 00:19:41.320 "zerocopy_threshold": 0, 00:19:41.320 "tls_version": 0, 00:19:41.320 "enable_ktls": false 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "sock_impl_set_options", 00:19:41.320 "params": { 
00:19:41.320 "impl_name": "ssl", 00:19:41.320 "recv_buf_size": 4096, 00:19:41.320 "send_buf_size": 4096, 00:19:41.320 "enable_recv_pipe": true, 00:19:41.320 "enable_quickack": false, 00:19:41.320 "enable_placement_id": 0, 00:19:41.320 "enable_zerocopy_send_server": true, 00:19:41.320 "enable_zerocopy_send_client": false, 00:19:41.320 "zerocopy_threshold": 0, 00:19:41.320 "tls_version": 0, 00:19:41.320 "enable_ktls": false 00:19:41.320 } 00:19:41.320 } 00:19:41.320 ] 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "subsystem": "vmd", 00:19:41.320 "config": [] 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "subsystem": "accel", 00:19:41.320 "config": [ 00:19:41.320 { 00:19:41.320 "method": "accel_set_options", 00:19:41.320 "params": { 00:19:41.320 "small_cache_size": 128, 00:19:41.320 "large_cache_size": 16, 00:19:41.320 "task_count": 2048, 00:19:41.320 "sequence_count": 2048, 00:19:41.320 "buf_count": 2048 00:19:41.320 } 00:19:41.320 } 00:19:41.320 ] 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "subsystem": "bdev", 00:19:41.320 "config": [ 00:19:41.320 { 00:19:41.320 "method": "bdev_set_options", 00:19:41.320 "params": { 00:19:41.320 "bdev_io_pool_size": 65535, 00:19:41.320 "bdev_io_cache_size": 256, 00:19:41.320 "bdev_auto_examine": true, 00:19:41.320 "iobuf_small_cache_size": 128, 00:19:41.320 "iobuf_large_cache_size": 16 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_raid_set_options", 00:19:41.320 "params": { 00:19:41.320 "process_window_size_kb": 1024 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_iscsi_set_options", 00:19:41.320 "params": { 00:19:41.320 "timeout_sec": 30 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_nvme_set_options", 00:19:41.320 "params": { 00:19:41.320 "action_on_timeout": "none", 00:19:41.320 "timeout_us": 0, 00:19:41.320 "timeout_admin_us": 0, 00:19:41.320 "keep_alive_timeout_ms": 10000, 00:19:41.320 "transport_retry_count": 4, 00:19:41.320 "arbitration_burst": 
0, 00:19:41.320 "low_priority_weight": 0, 00:19:41.320 "medium_priority_weight": 0, 00:19:41.320 "high_priority_weight": 0, 00:19:41.320 "nvme_adminq_poll_period_us": 10000, 00:19:41.320 "nvme_ioq_poll_period_us": 0, 00:19:41.320 "io_queue_requests": 512, 00:19:41.320 "delay_cmd_submit": true, 00:19:41.320 "bdev_retry_count": 3, 00:19:41.320 "transport_ack_timeout": 0, 00:19:41.320 "ctrlr_loss_timeout_sec": 0, 00:19:41.320 "reconnect_delay_sec": 0, 00:19:41.320 "fast_io_fail_timeout_sec": 0, 00:19:41.320 "generate_uuids": false, 00:19:41.320 "transport_tos": 0, 00:19:41.320 "io_path_stat": false, 00:19:41.320 "allow_accel_sequence": false 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_nvme_attach_controller", 00:19:41.320 "params": { 00:19:41.320 "name": "TLSTEST", 00:19:41.320 "trtype": "TCP", 00:19:41.320 "adrfam": "IPv4", 00:19:41.320 "traddr": "10.0.0.2", 00:19:41.320 "trsvcid": "4420", 00:19:41.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.320 "prchk_reftag": false, 00:19:41.320 "prchk_guard": false, 00:19:41.320 "ctrlr_loss_timeout_sec": 0, 00:19:41.320 "reconnect_delay_sec": 0, 00:19:41.320 "fast_io_fail_timeout_sec": 0, 00:19:41.320 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:41.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.320 "hdgst": false, 00:19:41.320 "ddgst": false 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_nvme_set_hotplug", 00:19:41.320 "params": { 00:19:41.320 "period_us": 100000, 00:19:41.320 "enable": false 00:19:41.320 } 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "method": "bdev_wait_for_examine" 00:19:41.320 } 00:19:41.320 ] 00:19:41.320 }, 00:19:41.320 { 00:19:41.320 "subsystem": "nbd", 00:19:41.320 "config": [] 00:19:41.320 } 00:19:41.320 ] 00:19:41.320 }' 00:19:41.320 07:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:41.320 07:38:45 -- common/autotest_common.sh@10 -- # set +x 00:19:41.320 
[2024-10-07 07:38:45.080478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:41.320 [2024-10-07 07:38:45.080523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4152825 ] 00:19:41.320 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.320 [2024-10-07 07:38:45.129746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.320 [2024-10-07 07:38:45.197824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.578 [2024-10-07 07:38:45.331500] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.143 07:38:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:42.143 07:38:45 -- common/autotest_common.sh@852 -- # return 0 00:19:42.143 07:38:45 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:42.143 Running I/O for 10 seconds... 
00:19:52.106 00:19:52.106 Latency(us) 00:19:52.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.106 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.106 Verification LBA range: start 0x0 length 0x2000 00:19:52.106 TLSTESTn1 : 10.01 4997.87 19.52 0.00 0.00 25585.90 5554.96 48434.22 00:19:52.106 =================================================================================================================== 00:19:52.106 Total : 4997.87 19.52 0.00 0.00 25585.90 5554.96 48434.22 00:19:52.106 0 00:19:52.106 07:38:55 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.106 07:38:55 -- target/tls.sh@223 -- # killprocess 4152825 00:19:52.106 07:38:55 -- common/autotest_common.sh@926 -- # '[' -z 4152825 ']' 00:19:52.106 07:38:55 -- common/autotest_common.sh@930 -- # kill -0 4152825 00:19:52.106 07:38:55 -- common/autotest_common.sh@931 -- # uname 00:19:52.106 07:38:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.106 07:38:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4152825 00:19:52.106 07:38:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:52.106 07:38:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:52.106 07:38:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4152825' 00:19:52.106 killing process with pid 4152825 00:19:52.106 07:38:56 -- common/autotest_common.sh@945 -- # kill 4152825 00:19:52.106 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.106 00:19:52.106 Latency(us) 00:19:52.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.106 =================================================================================================================== 00:19:52.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.106 07:38:56 -- common/autotest_common.sh@950 -- # wait 4152825 00:19:52.364 07:38:56 -- 
target/tls.sh@224 -- # killprocess 4152582 00:19:52.364 07:38:56 -- common/autotest_common.sh@926 -- # '[' -z 4152582 ']' 00:19:52.364 07:38:56 -- common/autotest_common.sh@930 -- # kill -0 4152582 00:19:52.364 07:38:56 -- common/autotest_common.sh@931 -- # uname 00:19:52.364 07:38:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.364 07:38:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4152582 00:19:52.364 07:38:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:52.364 07:38:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:52.364 07:38:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4152582' 00:19:52.364 killing process with pid 4152582 00:19:52.364 07:38:56 -- common/autotest_common.sh@945 -- # kill 4152582 00:19:52.364 07:38:56 -- common/autotest_common.sh@950 -- # wait 4152582 00:19:52.623 07:38:56 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:19:52.623 07:38:56 -- target/tls.sh@227 -- # cleanup 00:19:52.623 07:38:56 -- target/tls.sh@15 -- # process_shm --id 0 00:19:52.623 07:38:56 -- common/autotest_common.sh@796 -- # type=--id 00:19:52.623 07:38:56 -- common/autotest_common.sh@797 -- # id=0 00:19:52.623 07:38:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:52.623 07:38:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:52.623 07:38:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:52.623 07:38:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:52.623 07:38:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:52.623 07:38:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:52.623 nvmf_trace.0 00:19:52.623 07:38:56 -- common/autotest_common.sh@811 -- # return 0 00:19:52.623 07:38:56 -- target/tls.sh@16 -- # killprocess 4152825 
00:19:52.623 07:38:56 -- common/autotest_common.sh@926 -- # '[' -z 4152825 ']' 00:19:52.623 07:38:56 -- common/autotest_common.sh@930 -- # kill -0 4152825 00:19:52.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4152825) - No such process 00:19:52.623 07:38:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4152825 is not found' 00:19:52.623 Process with pid 4152825 is not found 00:19:52.623 07:38:56 -- target/tls.sh@17 -- # nvmftestfini 00:19:52.623 07:38:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.623 07:38:56 -- nvmf/common.sh@116 -- # sync 00:19:52.881 07:38:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.881 07:38:56 -- nvmf/common.sh@119 -- # set +e 00:19:52.881 07:38:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.881 07:38:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.881 rmmod nvme_tcp 00:19:52.881 rmmod nvme_fabrics 00:19:52.881 rmmod nvme_keyring 00:19:52.881 07:38:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.881 07:38:56 -- nvmf/common.sh@123 -- # set -e 00:19:52.881 07:38:56 -- nvmf/common.sh@124 -- # return 0 00:19:52.881 07:38:56 -- nvmf/common.sh@477 -- # '[' -n 4152582 ']' 00:19:52.882 07:38:56 -- nvmf/common.sh@478 -- # killprocess 4152582 00:19:52.882 07:38:56 -- common/autotest_common.sh@926 -- # '[' -z 4152582 ']' 00:19:52.882 07:38:56 -- common/autotest_common.sh@930 -- # kill -0 4152582 00:19:52.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4152582) - No such process 00:19:52.882 07:38:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4152582 is not found' 00:19:52.882 Process with pid 4152582 is not found 00:19:52.882 07:38:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:52.882 07:38:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:52.882 07:38:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:52.882 07:38:56 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.882 07:38:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:52.882 07:38:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.882 07:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.882 07:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.783 07:38:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:54.783 07:38:58 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:54.783 00:19:54.783 real 1m12.704s 00:19:54.783 user 1m46.797s 00:19:54.783 sys 0m28.482s 00:19:54.783 07:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.783 07:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:54.783 ************************************ 00:19:54.783 END TEST nvmf_tls 00:19:54.783 ************************************ 00:19:54.783 07:38:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:54.783 07:38:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:54.783 07:38:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:54.783 07:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.042 ************************************ 00:19:55.042 START TEST nvmf_fips 00:19:55.042 ************************************ 00:19:55.042 07:38:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:55.042 * Looking for test storage... 
00:19:55.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:55.042 07:38:58 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.042 07:38:58 -- nvmf/common.sh@7 -- # uname -s 00:19:55.042 07:38:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.042 07:38:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.042 07:38:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.042 07:38:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.042 07:38:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.042 07:38:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.042 07:38:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.042 07:38:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.042 07:38:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.042 07:38:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.042 07:38:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.042 07:38:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.042 07:38:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.042 07:38:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.042 07:38:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.042 07:38:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.042 07:38:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.042 07:38:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.042 07:38:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.043 07:38:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.043 07:38:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.043 07:38:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.043 07:38:58 -- paths/export.sh@5 -- # export PATH 00:19:55.043 07:38:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.043 07:38:58 -- nvmf/common.sh@46 -- # : 0 00:19:55.043 07:38:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:55.043 07:38:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:55.043 07:38:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:55.043 07:38:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.043 07:38:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.043 07:38:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:55.043 07:38:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:55.043 07:38:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:55.043 07:38:58 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.043 07:38:58 -- fips/fips.sh@89 -- # check_openssl_version 00:19:55.043 07:38:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:55.043 07:38:58 -- fips/fips.sh@85 -- # openssl version 00:19:55.043 07:38:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:55.043 07:38:58 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:19:55.043 07:38:58 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:55.043 07:38:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:55.043 07:38:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:55.043 07:38:58 -- scripts/common.sh@335 -- # IFS=.-: 00:19:55.043 07:38:58 -- scripts/common.sh@335 -- # read -ra ver1 00:19:55.043 07:38:58 -- scripts/common.sh@336 -- # IFS=.-: 
00:19:55.043 07:38:58 -- scripts/common.sh@336 -- # read -ra ver2 00:19:55.043 07:38:58 -- scripts/common.sh@337 -- # local 'op=>=' 00:19:55.043 07:38:58 -- scripts/common.sh@339 -- # ver1_l=3 00:19:55.043 07:38:58 -- scripts/common.sh@340 -- # ver2_l=3 00:19:55.043 07:38:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:55.043 07:38:58 -- scripts/common.sh@343 -- # case "$op" in 00:19:55.043 07:38:58 -- scripts/common.sh@347 -- # : 1 00:19:55.043 07:38:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:55.043 07:38:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.043 07:38:58 -- scripts/common.sh@364 -- # decimal 3 00:19:55.043 07:38:58 -- scripts/common.sh@352 -- # local d=3 00:19:55.043 07:38:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:55.043 07:38:58 -- scripts/common.sh@354 -- # echo 3 00:19:55.043 07:38:58 -- scripts/common.sh@364 -- # ver1[v]=3 00:19:55.043 07:38:58 -- scripts/common.sh@365 -- # decimal 3 00:19:55.043 07:38:58 -- scripts/common.sh@352 -- # local d=3 00:19:55.043 07:38:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:55.043 07:38:58 -- scripts/common.sh@354 -- # echo 3 00:19:55.043 07:38:58 -- scripts/common.sh@365 -- # ver2[v]=3 00:19:55.043 07:38:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:55.043 07:38:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:55.043 07:38:58 -- scripts/common.sh@363 -- # (( v++ )) 00:19:55.043 07:38:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.043 07:38:58 -- scripts/common.sh@364 -- # decimal 1 00:19:55.043 07:38:58 -- scripts/common.sh@352 -- # local d=1 00:19:55.043 07:38:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.043 07:38:58 -- scripts/common.sh@354 -- # echo 1 00:19:55.043 07:38:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:55.043 07:38:58 -- scripts/common.sh@365 -- # decimal 0 00:19:55.043 07:38:58 -- scripts/common.sh@352 -- # local d=0 00:19:55.043 07:38:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:55.043 07:38:58 -- scripts/common.sh@354 -- # echo 0 00:19:55.043 07:38:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:55.043 07:38:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:55.043 07:38:58 -- scripts/common.sh@366 -- # return 0 00:19:55.043 07:38:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:55.043 07:38:58 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:55.043 07:38:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:55.043 07:38:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:55.043 07:38:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:55.043 07:38:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:55.043 07:38:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:55.043 07:38:58 -- fips/fips.sh@113 -- # build_openssl_config 00:19:55.043 07:38:58 -- fips/fips.sh@37 -- # cat 00:19:55.043 07:38:58 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:55.043 07:38:58 -- fips/fips.sh@58 -- # cat - 00:19:55.043 07:38:58 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:55.043 07:38:58 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:55.043 07:38:58 -- fips/fips.sh@116 -- # mapfile -t providers 00:19:55.043 07:38:58 -- fips/fips.sh@116 -- # openssl list -providers 00:19:55.043 07:38:58 -- fips/fips.sh@116 -- # grep name 00:19:55.043 07:38:58 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:55.043 07:38:58 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:55.043 07:38:58 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:55.043 07:38:58 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:55.043 07:38:58 -- common/autotest_common.sh@640 -- # local es=0 00:19:55.043 07:38:58 -- fips/fips.sh@127 -- # : 00:19:55.043 07:38:58 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:55.043 07:38:58 -- common/autotest_common.sh@628 -- # local arg=openssl 00:19:55.043 07:38:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:55.043 07:38:58 -- common/autotest_common.sh@632 -- # type -t openssl 00:19:55.043 07:38:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:55.043 07:38:58 -- common/autotest_common.sh@634 -- # type -P openssl 00:19:55.043 07:38:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:55.043 07:38:58 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:19:55.043 07:38:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:19:55.043 07:38:58 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:19:55.302 Error setting digest 00:19:55.302 40C24911ED7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:55.302 
40C24911ED7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:55.302 07:38:59 -- common/autotest_common.sh@643 -- # es=1 00:19:55.302 07:38:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:55.302 07:38:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:55.302 07:38:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:55.302 07:38:59 -- fips/fips.sh@130 -- # nvmftestinit 00:19:55.302 07:38:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:55.302 07:38:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.302 07:38:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:55.302 07:38:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:55.302 07:38:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:55.302 07:38:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.302 07:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.302 07:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.302 07:38:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:55.302 07:38:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:55.302 07:38:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:55.302 07:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.564 07:39:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.564 07:39:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:00.564 07:39:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:00.564 07:39:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:00.564 07:39:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:00.564 07:39:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:00.564 07:39:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:00.564 07:39:04 -- nvmf/common.sh@294 -- # net_devs=() 00:20:00.564 07:39:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:00.564 07:39:04 -- 
nvmf/common.sh@295 -- # e810=() 00:20:00.564 07:39:04 -- nvmf/common.sh@295 -- # local -ga e810 00:20:00.564 07:39:04 -- nvmf/common.sh@296 -- # x722=() 00:20:00.564 07:39:04 -- nvmf/common.sh@296 -- # local -ga x722 00:20:00.564 07:39:04 -- nvmf/common.sh@297 -- # mlx=() 00:20:00.564 07:39:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:00.564 07:39:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.564 07:39:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:00.564 07:39:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:00.564 07:39:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:00.564 07:39:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:00.564 07:39:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:00.565 07:39:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.565 07:39:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:00.565 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:00.565 
07:39:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.565 07:39:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:00.565 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:00.565 07:39:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.565 07:39:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.565 07:39:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.565 07:39:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:00.565 Found net devices under 0000:af:00.0: cvl_0_0 00:20:00.565 07:39:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.565 07:39:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.565 07:39:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.565 07:39:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.565 07:39:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:00.565 Found net devices under 0000:af:00.1: cvl_0_1 00:20:00.565 07:39:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.565 07:39:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:00.565 07:39:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:00.565 07:39:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:00.565 07:39:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.565 07:39:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.565 07:39:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.565 07:39:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:00.565 07:39:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.565 07:39:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.565 07:39:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:00.565 07:39:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.565 07:39:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.565 07:39:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:00.565 07:39:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:00.565 07:39:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.565 07:39:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.565 07:39:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.565 07:39:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.565 07:39:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:00.565 07:39:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:20:00.823 07:39:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.823 07:39:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.823 07:39:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:00.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:20:00.823 00:20:00.823 --- 10.0.0.2 ping statistics --- 00:20:00.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.823 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:20:00.823 07:39:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:00.823 00:20:00.823 --- 10.0.0.1 ping statistics --- 00:20:00.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.823 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:00.823 07:39:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.823 07:39:04 -- nvmf/common.sh@410 -- # return 0 00:20:00.823 07:39:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.823 07:39:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.823 07:39:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.823 07:39:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.823 07:39:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.823 07:39:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.823 07:39:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.823 07:39:04 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:00.823 07:39:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.823 07:39:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.823 07:39:04 -- common/autotest_common.sh@10 -- # set +x 
00:20:00.823 07:39:04 -- nvmf/common.sh@469 -- # nvmfpid=4158258 00:20:00.824 07:39:04 -- nvmf/common.sh@470 -- # waitforlisten 4158258 00:20:00.824 07:39:04 -- common/autotest_common.sh@819 -- # '[' -z 4158258 ']' 00:20:00.824 07:39:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.824 07:39:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.824 07:39:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.824 07:39:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.824 07:39:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.824 07:39:04 -- common/autotest_common.sh@10 -- # set +x 00:20:00.824 [2024-10-07 07:39:04.680927] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:00.824 [2024-10-07 07:39:04.680971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.824 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.824 [2024-10-07 07:39:04.738708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.082 [2024-10-07 07:39:04.813016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:01.082 [2024-10-07 07:39:04.813139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.082 [2024-10-07 07:39:04.813147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:01.082 [2024-10-07 07:39:04.813153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.082 [2024-10-07 07:39:04.813175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.647 07:39:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.647 07:39:05 -- common/autotest_common.sh@852 -- # return 0 00:20:01.647 07:39:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.647 07:39:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:01.647 07:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:01.647 07:39:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.647 07:39:05 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:01.647 07:39:05 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:01.647 07:39:05 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:01.647 07:39:05 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:01.647 07:39:05 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:01.647 07:39:05 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:01.647 07:39:05 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:01.647 07:39:05 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.906 [2024-10-07 07:39:05.666154] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.906 [2024-10-07 07:39:05.682162] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.906 [2024-10-07 07:39:05.682368] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:01.906 malloc0 00:20:01.906 07:39:05 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.906 07:39:05 -- fips/fips.sh@147 -- # bdevperf_pid=4158380 00:20:01.906 07:39:05 -- fips/fips.sh@148 -- # waitforlisten 4158380 /var/tmp/bdevperf.sock 00:20:01.906 07:39:05 -- common/autotest_common.sh@819 -- # '[' -z 4158380 ']' 00:20:01.906 07:39:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.906 07:39:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:01.906 07:39:05 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.906 07:39:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.906 07:39:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:01.906 07:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:01.906 [2024-10-07 07:39:05.794869] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:20:01.906 [2024-10-07 07:39:05.794918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158380 ] 00:20:01.906 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.906 [2024-10-07 07:39:05.844875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.164 [2024-10-07 07:39:05.920678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.729 07:39:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:02.729 07:39:06 -- common/autotest_common.sh@852 -- # return 0 00:20:02.729 07:39:06 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:02.987 [2024-10-07 07:39:06.759321] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.987 TLSTESTn1 00:20:02.987 07:39:06 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.987 Running I/O for 10 seconds... 
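Before `bdev_nvme_attach_controller --psk` above, the test wrote the TLS PSK to a file and restricted it with `chmod 0600` (fips.sh@138-139). A sketch of that pattern, using the key string from the log but a temporary path rather than the test's path; `stat -c '%a'` assumes GNU coreutils on Linux:

```shell
# Write a TLS PSK to an owner-only file, as the test does before --psk.
# Key string copied from the log; key_path here is a temp file (assumption).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp)
printf '%s' "$key" > "$key_path"   # like `echo -n`: no trailing newline
chmod 0600 "$key_path"
perms=$(stat -c '%a' "$key_path")  # GNU stat; prints octal mode bits
echo "$perms"
rm -f "$key_path"
```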
00:20:13.084 00:20:13.084 Latency(us) 00:20:13.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.084 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:13.085 Verification LBA range: start 0x0 length 0x2000 00:20:13.085 TLSTESTn1 : 10.03 4799.30 18.75 0.00 0.00 26627.40 4618.73 46686.60 00:20:13.085 =================================================================================================================== 00:20:13.085 Total : 4799.30 18.75 0.00 0.00 26627.40 4618.73 46686.60 00:20:13.085 0 00:20:13.085 07:39:17 -- fips/fips.sh@1 -- # cleanup 00:20:13.085 07:39:17 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:13.085 07:39:17 -- common/autotest_common.sh@796 -- # type=--id 00:20:13.085 07:39:17 -- common/autotest_common.sh@797 -- # id=0 00:20:13.085 07:39:17 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:13.085 07:39:17 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.085 07:39:17 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:13.085 07:39:17 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:13.085 07:39:17 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:13.085 07:39:17 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.085 nvmf_trace.0 00:20:13.342 07:39:17 -- common/autotest_common.sh@811 -- # return 0 00:20:13.342 07:39:17 -- fips/fips.sh@16 -- # killprocess 4158380 00:20:13.342 07:39:17 -- common/autotest_common.sh@926 -- # '[' -z 4158380 ']' 00:20:13.342 07:39:17 -- common/autotest_common.sh@930 -- # kill -0 4158380 00:20:13.342 07:39:17 -- common/autotest_common.sh@931 -- # uname 00:20:13.342 07:39:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:13.342 07:39:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4158380 00:20:13.342 
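The bdevperf row above can be sanity-checked: with a 4096-byte I/O size, throughput in MiB/s should equal IOPS × 4096 / 2^20. Numbers are copied from the table:

```shell
# Consistency check on the TLSTESTn1 result row: 4799.30 IOPS at 4 KiB
# per I/O should round to the reported 18.75 MiB/s.
awk 'BEGIN { printf "%.2f\n", 4799.30 * 4096 / 1048576 }'
```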
07:39:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:13.342 07:39:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:13.342 07:39:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4158380' 00:20:13.342 killing process with pid 4158380 00:20:13.342 07:39:17 -- common/autotest_common.sh@945 -- # kill 4158380 00:20:13.342 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.342 00:20:13.342 Latency(us) 00:20:13.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.342 =================================================================================================================== 00:20:13.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.342 07:39:17 -- common/autotest_common.sh@950 -- # wait 4158380 00:20:13.600 07:39:17 -- fips/fips.sh@17 -- # nvmftestfini 00:20:13.600 07:39:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:13.600 07:39:17 -- nvmf/common.sh@116 -- # sync 00:20:13.600 07:39:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:13.600 07:39:17 -- nvmf/common.sh@119 -- # set +e 00:20:13.600 07:39:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:13.600 07:39:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:13.600 rmmod nvme_tcp 00:20:13.600 rmmod nvme_fabrics 00:20:13.600 rmmod nvme_keyring 00:20:13.600 07:39:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:13.600 07:39:17 -- nvmf/common.sh@123 -- # set -e 00:20:13.600 07:39:17 -- nvmf/common.sh@124 -- # return 0 00:20:13.600 07:39:17 -- nvmf/common.sh@477 -- # '[' -n 4158258 ']' 00:20:13.600 07:39:17 -- nvmf/common.sh@478 -- # killprocess 4158258 00:20:13.600 07:39:17 -- common/autotest_common.sh@926 -- # '[' -z 4158258 ']' 00:20:13.600 07:39:17 -- common/autotest_common.sh@930 -- # kill -0 4158258 00:20:13.600 07:39:17 -- common/autotest_common.sh@931 -- # uname 00:20:13.600 07:39:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:20:13.600 07:39:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4158258 00:20:13.600 07:39:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:13.600 07:39:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:13.600 07:39:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4158258' 00:20:13.600 killing process with pid 4158258 00:20:13.600 07:39:17 -- common/autotest_common.sh@945 -- # kill 4158258 00:20:13.600 07:39:17 -- common/autotest_common.sh@950 -- # wait 4158258 00:20:13.857 07:39:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:13.857 07:39:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:13.857 07:39:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:13.857 07:39:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.857 07:39:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:13.857 07:39:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.857 07:39:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.857 07:39:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.756 07:39:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:15.756 07:39:19 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:15.756 00:20:15.756 real 0m20.967s 00:20:15.756 user 0m21.855s 00:20:15.756 sys 0m9.963s 00:20:15.756 07:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.756 07:39:19 -- common/autotest_common.sh@10 -- # set +x 00:20:15.756 ************************************ 00:20:15.756 END TEST nvmf_fips 00:20:15.756 ************************************ 00:20:16.014 07:39:19 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:16.014 07:39:19 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:16.014 07:39:19 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:16.014 07:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.015 07:39:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.015 ************************************ 00:20:16.015 START TEST nvmf_fuzz 00:20:16.015 ************************************ 00:20:16.015 07:39:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:16.015 * Looking for test storage... 00:20:16.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.015 07:39:19 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.015 07:39:19 -- nvmf/common.sh@7 -- # uname -s 00:20:16.015 07:39:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.015 07:39:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.015 07:39:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.015 07:39:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.015 07:39:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.015 07:39:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.015 07:39:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.015 07:39:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.015 07:39:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.015 07:39:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.015 07:39:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.015 07:39:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.015 07:39:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.015 07:39:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.015 07:39:19 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:20:16.015 07:39:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.015 07:39:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.015 07:39:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.015 07:39:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.015 07:39:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.015 07:39:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.015 07:39:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.015 07:39:19 -- paths/export.sh@5 -- # export PATH 00:20:16.015 07:39:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.015 07:39:19 -- nvmf/common.sh@46 -- # : 0 00:20:16.015 07:39:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.015 07:39:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.015 07:39:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.015 07:39:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.015 07:39:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.015 07:39:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:16.015 07:39:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.015 07:39:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.015 07:39:19 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:16.015 07:39:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:16.015 07:39:19 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:16.015 07:39:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:16.015 07:39:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:16.015 07:39:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:16.015 07:39:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.015 07:39:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.015 07:39:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.015 07:39:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:16.015 07:39:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:16.015 07:39:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:16.015 07:39:19 -- common/autotest_common.sh@10 -- # set +x 00:20:21.284 07:39:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:21.284 07:39:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:21.284 07:39:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:21.284 07:39:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:21.284 07:39:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:21.284 07:39:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:21.284 07:39:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:21.284 07:39:25 -- nvmf/common.sh@294 -- # net_devs=() 00:20:21.284 07:39:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:21.284 07:39:25 -- nvmf/common.sh@295 -- # e810=() 00:20:21.284 07:39:25 -- nvmf/common.sh@295 -- # local -ga e810 00:20:21.284 07:39:25 -- nvmf/common.sh@296 -- # x722=() 00:20:21.284 07:39:25 -- nvmf/common.sh@296 -- # local -ga x722 00:20:21.284 07:39:25 -- nvmf/common.sh@297 -- # mlx=() 00:20:21.284 07:39:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:21.284 07:39:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.284 07:39:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:21.284 07:39:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:21.284 07:39:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:21.284 07:39:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:21.284 07:39:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:21.284 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:21.284 07:39:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:21.284 07:39:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:21.284 07:39:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:21.284 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:20:21.284 07:39:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:21.285 07:39:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:21.285 07:39:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.285 07:39:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:21.285 07:39:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.285 07:39:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:21.285 Found net devices under 0000:af:00.0: cvl_0_0 00:20:21.285 07:39:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.285 07:39:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:21.285 07:39:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.285 07:39:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:21.285 07:39:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.285 07:39:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:21.285 Found net devices under 0000:af:00.1: cvl_0_1 00:20:21.285 07:39:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.285 07:39:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:21.285 07:39:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:21.285 07:39:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:21.285 07:39:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:21.285 07:39:25 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:21.285 07:39:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.285 07:39:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.285 07:39:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.285 07:39:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:21.285 07:39:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.285 07:39:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.285 07:39:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:21.285 07:39:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.285 07:39:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.285 07:39:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:21.285 07:39:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:21.285 07:39:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.285 07:39:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.285 07:39:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.285 07:39:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.285 07:39:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:21.285 07:39:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.285 07:39:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.545 07:39:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.545 07:39:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:21.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:21.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:20:21.545 00:20:21.545 --- 10.0.0.2 ping statistics --- 00:20:21.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.545 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:20:21.545 07:39:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:20:21.545 00:20:21.545 --- 10.0.0.1 ping statistics --- 00:20:21.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.545 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:21.545 07:39:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.545 07:39:25 -- nvmf/common.sh@410 -- # return 0 00:20:21.545 07:39:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.545 07:39:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.545 07:39:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.545 07:39:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.545 07:39:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.545 07:39:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.545 07:39:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.545 07:39:25 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4164119 00:20:21.545 07:39:25 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:21.545 07:39:25 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:21.545 07:39:25 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4164119 00:20:21.545 07:39:25 -- common/autotest_common.sh@819 -- # '[' -z 4164119 ']' 00:20:21.545 07:39:25 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:21.545 07:39:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:21.545 07:39:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.545 07:39:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:21.545 07:39:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 07:39:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.483 07:39:26 -- common/autotest_common.sh@852 -- # return 0 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.483 07:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.483 07:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 07:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:22.483 07:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.483 07:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 Malloc0 00:20:22.483 07:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.483 07:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.483 07:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 07:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.483 07:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.483 07:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 07:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
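The `rpc_cmd` calls above configure the fuzz target in four steps. A dry-run sketch of that sequence (the `run` wrapper only prints each command; drop it to execute for real). The relative `scripts/rpc.py` path is an assumption, the log uses an absolute workspace path:

```shell
# Dry-run of the target-side RPC sequence from fabrics_fuzz.sh.
run() { echo "+ $*"; }        # print instead of executing (sketch only)
RPC=scripts/rpc.py            # assumed path; adjust for your checkout
run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" bdev_malloc_create -b Malloc0 64 512
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
```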
00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.483 07:39:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.483 07:39:26 -- common/autotest_common.sh@10 -- # set +x 00:20:22.483 07:39:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:22.483 07:39:26 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:54.553 Fuzzing completed. Shutting down the fuzz application 00:20:54.553 00:20:54.553 Dumping successful admin opcodes: 00:20:54.553 8, 9, 10, 24, 00:20:54.553 Dumping successful io opcodes: 00:20:54.553 0, 9, 00:20:54.553 NS: 0x200003aeff00 I/O qp, Total commands completed: 894247, total successful commands: 5205, random_seed: 1176491520 00:20:54.553 NS: 0x200003aeff00 admin qp, Total commands completed: 86665, total successful commands: 692, random_seed: 1485105216 00:20:54.553 07:39:56 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:54.553 Fuzzing completed. 
Shutting down the fuzz application 00:20:54.553 00:20:54.553 Dumping successful admin opcodes: 00:20:54.553 24, 00:20:54.553 Dumping successful io opcodes: 00:20:54.553 00:20:54.553 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3899846792 00:20:54.554 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3899924226 00:20:54.554 07:39:57 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.554 07:39:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.554 07:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:54.554 07:39:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.554 07:39:57 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:54.554 07:39:57 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:54.554 07:39:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:54.554 07:39:57 -- nvmf/common.sh@116 -- # sync 00:20:54.554 07:39:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:54.554 07:39:57 -- nvmf/common.sh@119 -- # set +e 00:20:54.554 07:39:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:54.554 07:39:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:54.554 rmmod nvme_tcp 00:20:54.554 rmmod nvme_fabrics 00:20:54.554 rmmod nvme_keyring 00:20:54.554 07:39:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:54.554 07:39:58 -- nvmf/common.sh@123 -- # set -e 00:20:54.554 07:39:58 -- nvmf/common.sh@124 -- # return 0 00:20:54.554 07:39:58 -- nvmf/common.sh@477 -- # '[' -n 4164119 ']' 00:20:54.554 07:39:58 -- nvmf/common.sh@478 -- # killprocess 4164119 00:20:54.554 07:39:58 -- common/autotest_common.sh@926 -- # '[' -z 4164119 ']' 00:20:54.554 07:39:58 -- common/autotest_common.sh@930 -- # kill -0 4164119 00:20:54.554 07:39:58 -- common/autotest_common.sh@931 -- # uname 00:20:54.554 07:39:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
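The opcode dumps above list successful admin opcodes 8, 9, 10, 24. A hypothetical decoder for those values, assuming the fuzzer prints decimal opcodes; the names follow the NVMe base specification's admin command opcode table:

```shell
# Map the admin opcodes dumped by nvme_fuzz (decimal, per assumption)
# to their NVMe base-spec names.
admin_opcode_name() {
  case "$1" in
    8)  echo 'Abort' ;;          # 0x08
    9)  echo 'Set Features' ;;   # 0x09
    10) echo 'Get Features' ;;   # 0x0A
    24) echo 'Keep Alive' ;;     # 0x18
    *)  echo 'unknown' ;;
  esac
}
for op in 8 9 10 24; do admin_opcode_name "$op"; done
```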
00:20:54.554 07:39:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4164119 00:20:54.555 07:39:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:54.555 07:39:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:54.555 07:39:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4164119' 00:20:54.555 killing process with pid 4164119 00:20:54.555 07:39:58 -- common/autotest_common.sh@945 -- # kill 4164119 00:20:54.555 07:39:58 -- common/autotest_common.sh@950 -- # wait 4164119 00:20:54.555 07:39:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:54.555 07:39:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:54.555 07:39:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:54.555 07:39:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.555 07:39:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:54.556 07:39:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.556 07:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.556 07:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.459 07:40:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:56.459 07:40:00 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:56.459 00:20:56.459 real 0m40.622s 00:20:56.459 user 0m52.846s 00:20:56.459 sys 0m17.480s 00:20:56.459 07:40:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.459 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:56.459 ************************************ 00:20:56.459 END TEST nvmf_fuzz 00:20:56.459 ************************************ 00:20:56.459 07:40:00 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:20:56.459 07:40:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:56.459 07:40:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:56.459 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:20:56.459 ************************************ 00:20:56.459 START TEST nvmf_multiconnection 00:20:56.459 ************************************ 00:20:56.459 07:40:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:56.719 * Looking for test storage... 00:20:56.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.719 07:40:00 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.719 07:40:00 -- nvmf/common.sh@7 -- # uname -s 00:20:56.719 07:40:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.719 07:40:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.719 07:40:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.719 07:40:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.719 07:40:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.719 07:40:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.719 07:40:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.719 07:40:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.719 07:40:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.719 07:40:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.719 07:40:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.719 07:40:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.719 07:40:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.719 07:40:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:20:56.719 07:40:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.719 07:40:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.719 07:40:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.719 07:40:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.719 07:40:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.719 07:40:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.719 07:40:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.719 07:40:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.719 07:40:00 -- paths/export.sh@5 -- # export PATH 00:20:56.719 07:40:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.719 07:40:00 -- nvmf/common.sh@46 -- # : 0 00:20:56.719 07:40:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:56.719 07:40:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:56.719 07:40:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:56.719 07:40:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.719 07:40:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.719 07:40:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:56.719 07:40:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:56.719 07:40:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:56.719 07:40:00 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.719 07:40:00 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.719 07:40:00 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:56.719 07:40:00 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:56.719 07:40:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:56.719 07:40:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.719 07:40:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:56.720 07:40:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:56.720 07:40:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:56.720 07:40:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.720 07:40:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.720 07:40:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.720 07:40:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:56.720 07:40:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:56.720 07:40:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:56.720 07:40:00 -- common/autotest_common.sh@10 -- # set +x 00:21:01.994 07:40:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:01.994 07:40:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:01.994 07:40:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:01.994 07:40:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:01.994 07:40:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:01.994 07:40:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:01.994 07:40:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:01.994 07:40:05 -- nvmf/common.sh@294 -- # net_devs=() 00:21:01.994 07:40:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:01.994 07:40:05 -- nvmf/common.sh@295 -- # e810=() 00:21:01.994 07:40:05 -- nvmf/common.sh@295 -- # local -ga e810 00:21:01.994 07:40:05 -- nvmf/common.sh@296 -- # x722=() 00:21:01.994 07:40:05 -- nvmf/common.sh@296 -- # local -ga x722 00:21:01.994 07:40:05 -- nvmf/common.sh@297 -- # mlx=() 00:21:01.994 07:40:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:01.994 
07:40:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.994 07:40:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:01.994 07:40:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:01.994 07:40:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:01.994 07:40:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.994 07:40:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:01.994 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:01.994 07:40:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:21:01.994 07:40:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:01.994 07:40:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:01.994 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:01.994 07:40:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:01.994 07:40:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:01.995 07:40:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.995 07:40:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.995 07:40:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.995 07:40:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.995 07:40:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:01.995 Found net devices under 0000:af:00.0: cvl_0_0 00:21:01.995 07:40:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.995 07:40:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:01.995 07:40:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.995 07:40:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:01.995 07:40:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.995 07:40:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:01.995 Found net devices under 0000:af:00.1: cvl_0_1 00:21:01.995 07:40:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.995 07:40:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:01.995 
07:40:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:01.995 07:40:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:01.995 07:40:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.995 07:40:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.995 07:40:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.995 07:40:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:01.995 07:40:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.995 07:40:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.995 07:40:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:01.995 07:40:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.995 07:40:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.995 07:40:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:01.995 07:40:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:01.995 07:40:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.995 07:40:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.995 07:40:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.995 07:40:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.995 07:40:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:01.995 07:40:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.995 07:40:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.995 07:40:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.995 07:40:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:01.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:21:01.995 00:21:01.995 --- 10.0.0.2 ping statistics --- 00:21:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.995 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:21:01.995 07:40:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:21:01.995 00:21:01.995 --- 10.0.0.1 ping statistics --- 00:21:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.995 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:21:01.995 07:40:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.995 07:40:05 -- nvmf/common.sh@410 -- # return 0 00:21:01.995 07:40:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:01.995 07:40:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.995 07:40:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:01.995 07:40:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.995 07:40:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:01.995 07:40:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:01.995 07:40:05 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:01.995 07:40:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:01.995 07:40:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:01.995 07:40:05 -- common/autotest_common.sh@10 -- # set +x 00:21:01.995 07:40:05 -- nvmf/common.sh@469 -- # nvmfpid=4172805 00:21:01.995 07:40:05 -- nvmf/common.sh@470 -- # waitforlisten 4172805 00:21:01.995 07:40:05 -- common/autotest_common.sh@819 -- # '[' -z 4172805 ']' 00:21:01.995 07:40:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.995 07:40:05 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.995 07:40:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.995 07:40:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.995 07:40:05 -- common/autotest_common.sh@10 -- # set +x 00:21:01.995 07:40:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.995 [2024-10-07 07:40:05.746803] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:01.995 [2024-10-07 07:40:05.746846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.995 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.995 [2024-10-07 07:40:05.805606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.995 [2024-10-07 07:40:05.883400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:01.995 [2024-10-07 07:40:05.883506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.995 [2024-10-07 07:40:05.883514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.995 [2024-10-07 07:40:05.883520] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.995 [2024-10-07 07:40:05.883562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.995 [2024-10-07 07:40:05.883580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.995 [2024-10-07 07:40:05.883670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.995 [2024-10-07 07:40:05.883671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.933 07:40:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.933 07:40:06 -- common/autotest_common.sh@852 -- # return 0 00:21:02.933 07:40:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:02.933 07:40:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:02.933 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.933 07:40:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.933 07:40:06 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.933 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.933 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.933 [2024-10-07 07:40:06.607370] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.933 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.933 07:40:06 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:02.933 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.933 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.933 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.933 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.933 Malloc1 00:21:02.933 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.933 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:02.933 07:40:06 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.933 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 [2024-10-07 07:40:06.666859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 Malloc2 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 Malloc3 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 Malloc4 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 Malloc5 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 Malloc6 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.934 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:02.934 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.934 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:02.934 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:02.934 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 Malloc7 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.194 07:40:06 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 Malloc8 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.194 07:40:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:03.194 07:40:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 Malloc9 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.194 07:40:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 Malloc10 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.194 07:40:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:03.194 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.194 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.194 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:03.195 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.195 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.195 07:40:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:03.195 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.195 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 Malloc11 00:21:03.195 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:03.195 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.195 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:03.195 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.195 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:03.195 07:40:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.195 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.195 07:40:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.195 07:40:07 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:03.195 07:40:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.195 07:40:07 
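[editor's note] The trace above repeats the same four RPCs for cnode1 through cnode11 and then connects the host to each subsystem. A dry-run sketch of that loop is below; the `scripts/rpc.py` path and the function name are assumptions, and `echo` stands in for real execution so the sketch runs without a live SPDK target (swap `echo` out to actually issue the commands).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-subsystem loop seen in the log: create a 64 MiB
# malloc bdev with 512-byte blocks, a subsystem, a namespace, and a TCP
# listener for each index, then connect the host to every subsystem.
setup_and_connect_sketch() {
    local n=$1
    local rpc_py="scripts/rpc.py"   # assumed location of the SPDK RPC client
    local hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"
    local i
    for i in $(seq 1 "$n"); do
        echo "$rpc_py" bdev_malloc_create 64 512 -b "Malloc$i"
        echo "$rpc_py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        echo "$rpc_py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        echo "$rpc_py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    for i in $(seq 1 "$n"); do
        echo nvme connect --hostnqn="$hostnqn" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        # The real script then polls "lsblk -l -o NAME,SERIAL | grep -c SPDK$i"
        # (waitforserial) until the namespace appears as a block device.
    done
}
setup_and_connect_sketch 11
```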
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:04.571 07:40:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:04.571 07:40:08 -- common/autotest_common.sh@1177 -- # local i=0 00:21:04.571 07:40:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.571 07:40:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:04.571 07:40:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:06.482 07:40:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:06.482 07:40:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:06.482 07:40:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:06.482 07:40:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:06.482 07:40:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:06.482 07:40:10 -- common/autotest_common.sh@1187 -- # return 0 00:21:06.482 07:40:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:06.482 07:40:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:07.862 07:40:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:07.862 07:40:11 -- common/autotest_common.sh@1177 -- # local i=0 00:21:07.862 07:40:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.862 07:40:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:07.862 07:40:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:09.767 07:40:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:09.767 07:40:13 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:21:09.767 07:40:13 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:09.767 07:40:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:09.767 07:40:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.767 07:40:13 -- common/autotest_common.sh@1187 -- # return 0 00:21:09.767 07:40:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:09.767 07:40:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:11.144 07:40:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:11.144 07:40:14 -- common/autotest_common.sh@1177 -- # local i=0 00:21:11.144 07:40:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.144 07:40:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:11.144 07:40:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:13.048 07:40:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:13.048 07:40:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:13.048 07:40:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:13.048 07:40:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:13.048 07:40:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.048 07:40:16 -- common/autotest_common.sh@1187 -- # return 0 00:21:13.048 07:40:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.048 07:40:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:14.428 07:40:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:21:14.428 07:40:18 -- common/autotest_common.sh@1177 -- # local i=0 00:21:14.428 07:40:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.428 07:40:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:14.428 07:40:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:16.337 07:40:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:16.337 07:40:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:16.337 07:40:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:16.337 07:40:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:16.337 07:40:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:16.337 07:40:20 -- common/autotest_common.sh@1187 -- # return 0 00:21:16.337 07:40:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:16.337 07:40:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:17.713 07:40:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:17.713 07:40:21 -- common/autotest_common.sh@1177 -- # local i=0 00:21:17.713 07:40:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.713 07:40:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:17.713 07:40:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:19.618 07:40:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:19.618 07:40:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:19.618 07:40:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:19.618 07:40:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:19.618 07:40:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.618 07:40:23 -- 
common/autotest_common.sh@1187 -- # return 0 00:21:19.618 07:40:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.618 07:40:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:20.999 07:40:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:20.999 07:40:24 -- common/autotest_common.sh@1177 -- # local i=0 00:21:20.999 07:40:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.999 07:40:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:20.999 07:40:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:22.907 07:40:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:22.907 07:40:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:22.907 07:40:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:21:22.907 07:40:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:22.907 07:40:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.907 07:40:26 -- common/autotest_common.sh@1187 -- # return 0 00:21:22.907 07:40:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.907 07:40:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:24.291 07:40:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:24.291 07:40:27 -- common/autotest_common.sh@1177 -- # local i=0 00:21:24.291 07:40:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.291 07:40:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:24.291 07:40:27 -- common/autotest_common.sh@1184 
-- # sleep 2 00:21:26.198 07:40:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:26.198 07:40:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:26.198 07:40:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:21:26.198 07:40:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:26.198 07:40:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:26.198 07:40:29 -- common/autotest_common.sh@1187 -- # return 0 00:21:26.198 07:40:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:26.198 07:40:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:27.577 07:40:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:27.577 07:40:31 -- common/autotest_common.sh@1177 -- # local i=0 00:21:27.577 07:40:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.577 07:40:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:27.577 07:40:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:29.580 07:40:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:29.580 07:40:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:29.580 07:40:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:21:29.580 07:40:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:29.580 07:40:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.580 07:40:33 -- common/autotest_common.sh@1187 -- # return 0 00:21:29.580 07:40:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:29.580 07:40:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:31.023 07:40:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:31.023 07:40:34 -- common/autotest_common.sh@1177 -- # local i=0 00:21:31.023 07:40:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.023 07:40:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:31.023 07:40:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:32.930 07:40:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:32.930 07:40:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:32.930 07:40:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:21:32.930 07:40:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:32.930 07:40:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:32.930 07:40:36 -- common/autotest_common.sh@1187 -- # return 0 00:21:32.930 07:40:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.930 07:40:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:34.308 07:40:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:34.308 07:40:38 -- common/autotest_common.sh@1177 -- # local i=0 00:21:34.308 07:40:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:34.308 07:40:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:34.308 07:40:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:36.219 07:40:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:36.219 07:40:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:36.219 07:40:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:21:36.478 07:40:40 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:36.478 07:40:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:36.478 07:40:40 -- common/autotest_common.sh@1187 -- # return 0 00:21:36.478 07:40:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:36.478 07:40:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:37.858 07:40:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:37.858 07:40:41 -- common/autotest_common.sh@1177 -- # local i=0 00:21:37.858 07:40:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.858 07:40:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:37.858 07:40:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:39.765 07:40:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:39.765 07:40:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:39.765 07:40:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:21:39.765 07:40:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:39.765 07:40:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.765 07:40:43 -- common/autotest_common.sh@1187 -- # return 0 00:21:39.765 07:40:43 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:39.765 [global] 00:21:39.765 thread=1 00:21:39.765 invalidate=1 00:21:39.765 rw=read 00:21:39.765 time_based=1 00:21:39.765 runtime=10 00:21:39.765 ioengine=libaio 00:21:39.765 direct=1 00:21:39.765 bs=262144 00:21:39.765 iodepth=64 00:21:39.765 norandommap=1 00:21:39.765 numjobs=1 00:21:39.765 00:21:39.765 [job0] 00:21:39.765 filename=/dev/nvme0n1 00:21:39.765 [job1] 
00:21:39.765 filename=/dev/nvme10n1 00:21:39.765 [job2] 00:21:39.765 filename=/dev/nvme1n1 00:21:39.765 [job3] 00:21:39.765 filename=/dev/nvme2n1 00:21:39.765 [job4] 00:21:39.765 filename=/dev/nvme3n1 00:21:39.765 [job5] 00:21:39.765 filename=/dev/nvme4n1 00:21:39.765 [job6] 00:21:39.765 filename=/dev/nvme5n1 00:21:39.765 [job7] 00:21:39.765 filename=/dev/nvme6n1 00:21:39.765 [job8] 00:21:39.765 filename=/dev/nvme7n1 00:21:39.765 [job9] 00:21:39.765 filename=/dev/nvme8n1 00:21:39.765 [job10] 00:21:39.765 filename=/dev/nvme9n1 00:21:40.055 Could not set queue depth (nvme0n1) 00:21:40.055 Could not set queue depth (nvme10n1) 00:21:40.055 Could not set queue depth (nvme1n1) 00:21:40.055 Could not set queue depth (nvme2n1) 00:21:40.055 Could not set queue depth (nvme3n1) 00:21:40.055 Could not set queue depth (nvme4n1) 00:21:40.055 Could not set queue depth (nvme5n1) 00:21:40.055 Could not set queue depth (nvme6n1) 00:21:40.055 Could not set queue depth (nvme7n1) 00:21:40.055 Could not set queue depth (nvme8n1) 00:21:40.055 Could not set queue depth (nvme9n1) 00:21:40.314 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:40.314 fio-3.35 00:21:40.314 Starting 11 threads 00:21:52.509 00:21:52.509 job0: (groupid=0, jobs=1): err= 0: pid=4179524: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=687, BW=172MiB/s (180MB/s)(1732MiB/10072msec) 00:21:52.509 slat (usec): min=10, max=83946, avg=1037.61, stdev=3688.23 00:21:52.509 clat (msec): min=3, max=206, avg=91.83, stdev=35.96 00:21:52.509 lat (msec): min=3, max=206, avg=92.87, stdev=36.43 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 33], 20.00th=[ 70], 00:21:52.509 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 104], 00:21:52.509 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 134], 95.00th=[ 144], 00:21:52.509 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 197], 00:21:52.509 | 99.99th=[ 207] 00:21:52.509 bw ( KiB/s): min=131072, max=339968, per=7.94%, avg=175769.60, stdev=47772.10, samples=20 00:21:52.509 iops : min= 512, max= 1328, avg=686.60, stdev=186.61, samples=20 00:21:52.509 lat (msec) : 4=0.03%, 10=1.88%, 20=5.30%, 50=5.54%, 100=43.80% 00:21:52.509 lat (msec) : 250=43.46% 00:21:52.509 cpu : usr=0.22%, sys=2.55%, ctx=1890, majf=0, minf=4097 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=6929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, percentile=100.00%, depth=64 
00:21:52.509 job1: (groupid=0, jobs=1): err= 0: pid=4179525: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=871, BW=218MiB/s (229MB/s)(2197MiB/10077msec) 00:21:52.509 slat (usec): min=9, max=44121, avg=1046.73, stdev=3190.90 00:21:52.509 clat (msec): min=5, max=207, avg=72.26, stdev=38.47 00:21:52.509 lat (msec): min=6, max=207, avg=73.31, stdev=38.98 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 32], 00:21:52.509 | 30.00th=[ 40], 40.00th=[ 51], 50.00th=[ 69], 60.00th=[ 86], 00:21:52.509 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 123], 95.00th=[ 140], 00:21:52.509 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 192], 00:21:52.509 | 99.99th=[ 207] 00:21:52.509 bw ( KiB/s): min=98816, max=493056, per=10.08%, avg=223283.20, stdev=99812.00, samples=20 00:21:52.509 iops : min= 386, max= 1926, avg=872.20, stdev=389.89, samples=20 00:21:52.509 lat (msec) : 10=0.06%, 20=0.76%, 50=38.83%, 100=33.38%, 250=26.96% 00:21:52.509 cpu : usr=0.40%, sys=3.22%, ctx=1711, majf=0, minf=3722 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=8786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.509 job2: (groupid=0, jobs=1): err= 0: pid=4179528: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=767, BW=192MiB/s (201MB/s)(1932MiB/10073msec) 00:21:52.509 slat (usec): min=9, max=98623, avg=832.03, stdev=3544.66 00:21:52.509 clat (msec): min=2, max=218, avg=82.48, stdev=40.78 00:21:52.509 lat (msec): min=2, max=240, avg=83.31, stdev=41.31 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 42], 00:21:52.509 | 30.00th=[ 57], 40.00th=[ 74], 50.00th=[ 88], 60.00th=[ 97], 
00:21:52.509 | 70.00th=[ 107], 80.00th=[ 120], 90.00th=[ 136], 95.00th=[ 144], 00:21:52.509 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 205], 00:21:52.509 | 99.99th=[ 220] 00:21:52.509 bw ( KiB/s): min=121856, max=329728, per=8.86%, avg=196240.40, stdev=60002.97, samples=20 00:21:52.509 iops : min= 476, max= 1288, avg=766.55, stdev=234.39, samples=20 00:21:52.509 lat (msec) : 4=0.28%, 10=2.58%, 20=4.54%, 50=18.45%, 100=37.90% 00:21:52.509 lat (msec) : 250=36.24% 00:21:52.509 cpu : usr=0.23%, sys=3.01%, ctx=2064, majf=0, minf=4097 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=7728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.509 job3: (groupid=0, jobs=1): err= 0: pid=4179529: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=793, BW=198MiB/s (208MB/s)(2000MiB/10077msec) 00:21:52.509 slat (usec): min=10, max=79704, avg=875.54, stdev=3439.69 00:21:52.509 clat (usec): min=1046, max=210946, avg=79617.75, stdev=37939.54 00:21:52.509 lat (usec): min=1078, max=210990, avg=80493.29, stdev=38375.27 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 46], 00:21:52.509 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 95], 00:21:52.509 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 125], 95.00th=[ 140], 00:21:52.509 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 192], 00:21:52.509 | 99.99th=[ 211] 00:21:52.509 bw ( KiB/s): min=141312, max=344064, per=9.17%, avg=203187.20, stdev=52855.11, samples=20 00:21:52.509 iops : min= 552, max= 1344, avg=793.70, stdev=206.47, samples=20 00:21:52.509 lat (msec) : 2=0.62%, 4=1.16%, 10=2.21%, 20=5.19%, 50=13.29% 00:21:52.509 lat (msec) : 100=43.48%, 
250=34.05% 00:21:52.509 cpu : usr=0.22%, sys=3.28%, ctx=2008, majf=0, minf=4097 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=8000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.509 job4: (groupid=0, jobs=1): err= 0: pid=4179530: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=761, BW=190MiB/s (200MB/s)(1916MiB/10070msec) 00:21:52.509 slat (usec): min=8, max=111042, avg=899.87, stdev=3533.52 00:21:52.509 clat (usec): min=1671, max=229111, avg=83075.09, stdev=40932.92 00:21:52.509 lat (usec): min=1718, max=229148, avg=83974.96, stdev=41515.10 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 20], 20.00th=[ 45], 00:21:52.509 | 30.00th=[ 67], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 96], 00:21:52.509 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 134], 95.00th=[ 144], 00:21:52.509 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 197], 99.95th=[ 197], 00:21:52.509 | 99.99th=[ 230] 00:21:52.509 bw ( KiB/s): min=123392, max=318464, per=8.79%, avg=194611.20, stdev=55574.23, samples=20 00:21:52.509 iops : min= 482, max= 1244, avg=760.20, stdev=217.09, samples=20 00:21:52.509 lat (msec) : 2=0.08%, 4=0.80%, 10=4.44%, 20=5.08%, 50=11.91% 00:21:52.509 lat (msec) : 100=41.50%, 250=36.20% 00:21:52.509 cpu : usr=0.29%, sys=2.92%, ctx=2033, majf=0, minf=4097 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=7665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:21:52.509 job5: (groupid=0, jobs=1): err= 0: pid=4179531: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=767, BW=192MiB/s (201MB/s)(1933MiB/10075msec) 00:21:52.509 slat (usec): min=10, max=65768, avg=797.56, stdev=3247.27 00:21:52.509 clat (usec): min=1313, max=190620, avg=82490.56, stdev=38978.42 00:21:52.509 lat (usec): min=1340, max=199119, avg=83288.12, stdev=39369.31 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 48], 00:21:52.509 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 82], 60.00th=[ 92], 00:21:52.509 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 138], 95.00th=[ 146], 00:21:52.509 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 182], 99.95th=[ 184], 00:21:52.509 | 99.99th=[ 190] 00:21:52.509 bw ( KiB/s): min=125952, max=368640, per=8.86%, avg=196275.20, stdev=53584.37, samples=20 00:21:52.509 iops : min= 492, max= 1440, avg=766.70, stdev=209.31, samples=20 00:21:52.509 lat (msec) : 2=0.19%, 4=0.39%, 10=1.73%, 20=3.38%, 50=15.55% 00:21:52.509 lat (msec) : 100=45.41%, 250=33.35% 00:21:52.509 cpu : usr=0.26%, sys=3.10%, ctx=2117, majf=0, minf=4097 00:21:52.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.509 issued rwts: total=7730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.509 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.509 job6: (groupid=0, jobs=1): err= 0: pid=4179532: Mon Oct 7 07:40:54 2024 00:21:52.509 read: IOPS=694, BW=174MiB/s (182MB/s)(1749MiB/10078msec) 00:21:52.509 slat (usec): min=11, max=73355, avg=1044.38, stdev=3644.84 00:21:52.509 clat (usec): min=1387, max=212617, avg=91003.55, stdev=34776.08 00:21:52.509 lat (usec): min=1418, max=212659, avg=92047.92, stdev=35171.40 00:21:52.509 clat percentiles (msec): 00:21:52.509 | 1.00th=[ 
7], 5.00th=[ 23], 10.00th=[ 47], 20.00th=[ 65], 00:21:52.509 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 103], 00:21:52.509 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 146], 00:21:52.510 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 186], 99.95th=[ 186], 00:21:52.510 | 99.99th=[ 213] 00:21:52.510 bw ( KiB/s): min=120561, max=261632, per=8.01%, avg=177445.65, stdev=41825.66, samples=20 00:21:52.510 iops : min= 470, max= 1022, avg=693.10, stdev=163.45, samples=20 00:21:52.510 lat (msec) : 2=0.17%, 4=0.20%, 10=1.72%, 20=2.10%, 50=7.01% 00:21:52.510 lat (msec) : 100=46.03%, 250=42.77% 00:21:52.510 cpu : usr=0.32%, sys=2.84%, ctx=1880, majf=0, minf=4097 00:21:52.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:52.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.510 issued rwts: total=6995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.510 job7: (groupid=0, jobs=1): err= 0: pid=4179533: Mon Oct 7 07:40:54 2024 00:21:52.510 read: IOPS=918, BW=230MiB/s (241MB/s)(2311MiB/10066msec) 00:21:52.510 slat (usec): min=10, max=94883, avg=691.23, stdev=2873.46 00:21:52.510 clat (usec): min=1454, max=194550, avg=68937.23, stdev=39789.01 00:21:52.510 lat (usec): min=1491, max=197159, avg=69628.46, stdev=40155.56 00:21:52.510 clat percentiles (msec): 00:21:52.510 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 33], 00:21:52.510 | 30.00th=[ 40], 40.00th=[ 51], 50.00th=[ 65], 60.00th=[ 77], 00:21:52.510 | 70.00th=[ 90], 80.00th=[ 105], 90.00th=[ 127], 95.00th=[ 144], 00:21:52.510 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 178], 00:21:52.510 | 99.99th=[ 194] 00:21:52.510 bw ( KiB/s): min=126464, max=441344, per=10.61%, avg=234982.40, stdev=80897.96, samples=20 00:21:52.510 iops : min= 494, max= 1724, avg=917.90, 
stdev=316.01, samples=20 00:21:52.510 lat (msec) : 2=0.06%, 4=0.78%, 10=2.92%, 20=4.44%, 50=31.06% 00:21:52.510 lat (msec) : 100=37.26%, 250=23.48% 00:21:52.510 cpu : usr=0.32%, sys=3.29%, ctx=2375, majf=0, minf=4097 00:21:52.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:52.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.510 issued rwts: total=9243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.510 job8: (groupid=0, jobs=1): err= 0: pid=4179534: Mon Oct 7 07:40:54 2024 00:21:52.510 read: IOPS=765, BW=191MiB/s (201MB/s)(1928MiB/10069msec) 00:21:52.510 slat (usec): min=9, max=72503, avg=1052.87, stdev=3353.96 00:21:52.510 clat (msec): min=2, max=191, avg=82.41, stdev=36.97 00:21:52.510 lat (msec): min=2, max=191, avg=83.46, stdev=37.38 00:21:52.510 clat percentiles (msec): 00:21:52.510 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 46], 00:21:52.510 | 30.00th=[ 57], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 94], 00:21:52.510 | 70.00th=[ 105], 80.00th=[ 115], 90.00th=[ 133], 95.00th=[ 144], 00:21:52.510 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 184], 00:21:52.510 | 99.99th=[ 192] 00:21:52.510 bw ( KiB/s): min=105984, max=387584, per=8.84%, avg=195801.85, stdev=74575.08, samples=20 00:21:52.510 iops : min= 414, max= 1514, avg=764.85, stdev=291.31, samples=20 00:21:52.510 lat (msec) : 4=0.10%, 10=0.67%, 20=0.89%, 50=22.96%, 100=40.88% 00:21:52.510 lat (msec) : 250=34.48% 00:21:52.510 cpu : usr=0.28%, sys=3.13%, ctx=1691, majf=0, minf=4097 00:21:52.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.510 issued rwts: 
total=7712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.510 job9: (groupid=0, jobs=1): err= 0: pid=4179535: Mon Oct 7 07:40:54 2024 00:21:52.510 read: IOPS=794, BW=199MiB/s (208MB/s)(2001MiB/10069msec) 00:21:52.510 slat (usec): min=9, max=88875, avg=997.51, stdev=3400.06 00:21:52.510 clat (msec): min=2, max=194, avg=79.43, stdev=39.38 00:21:52.510 lat (msec): min=2, max=240, avg=80.42, stdev=39.94 00:21:52.510 clat percentiles (msec): 00:21:52.510 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 36], 00:21:52.510 | 30.00th=[ 56], 40.00th=[ 70], 50.00th=[ 81], 60.00th=[ 93], 00:21:52.510 | 70.00th=[ 104], 80.00th=[ 115], 90.00th=[ 132], 95.00th=[ 142], 00:21:52.510 | 99.00th=[ 159], 99.50th=[ 171], 99.90th=[ 194], 99.95th=[ 194], 00:21:52.510 | 99.99th=[ 194] 00:21:52.510 bw ( KiB/s): min=112640, max=374784, per=9.18%, avg=203289.60, stdev=72228.30, samples=20 00:21:52.510 iops : min= 440, max= 1464, avg=794.10, stdev=282.14, samples=20 00:21:52.510 lat (msec) : 4=0.12%, 10=2.05%, 20=3.52%, 50=21.11%, 100=40.30% 00:21:52.510 lat (msec) : 250=32.88% 00:21:52.510 cpu : usr=0.30%, sys=3.33%, ctx=1758, majf=0, minf=4097 00:21:52.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.510 issued rwts: total=8004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.510 job10: (groupid=0, jobs=1): err= 0: pid=4179536: Mon Oct 7 07:40:54 2024 00:21:52.510 read: IOPS=833, BW=208MiB/s (218MB/s)(2099MiB/10078msec) 00:21:52.510 slat (usec): min=7, max=145286, avg=771.15, stdev=3481.83 00:21:52.510 clat (usec): min=1039, max=220994, avg=75967.80, stdev=39725.52 00:21:52.510 lat (usec): min=1078, max=243745, avg=76738.94, stdev=40057.12 
00:21:52.510 clat percentiles (msec): 00:21:52.510 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 28], 20.00th=[ 42], 00:21:52.510 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 73], 60.00th=[ 86], 00:21:52.510 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 138], 00:21:52.510 | 99.00th=[ 215], 99.50th=[ 218], 99.90th=[ 220], 99.95th=[ 220], 00:21:52.510 | 99.99th=[ 222] 00:21:52.510 bw ( KiB/s): min=151040, max=380416, per=9.63%, avg=213299.20, stdev=63451.13, samples=20 00:21:52.510 iops : min= 590, max= 1486, avg=833.20, stdev=247.86, samples=20 00:21:52.510 lat (msec) : 2=0.15%, 4=0.23%, 10=2.32%, 20=4.60%, 50=17.62% 00:21:52.510 lat (msec) : 100=49.17%, 250=25.91% 00:21:52.510 cpu : usr=0.30%, sys=2.99%, ctx=2210, majf=0, minf=4098 00:21:52.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:52.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:52.510 issued rwts: total=8395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.510 00:21:52.510 Run status group 0 (all jobs): 00:21:52.510 READ: bw=2163MiB/s (2268MB/s), 172MiB/s-230MiB/s (180MB/s-241MB/s), io=21.3GiB (22.9GB), run=10066-10078msec 00:21:52.510 00:21:52.510 Disk stats (read/write): 00:21:52.510 nvme0n1: ios=13621/0, merge=0/0, ticks=1236261/0, in_queue=1236261, util=97.24% 00:21:52.510 nvme10n1: ios=17379/0, merge=0/0, ticks=1230191/0, in_queue=1230191, util=97.48% 00:21:52.510 nvme1n1: ios=15273/0, merge=0/0, ticks=1235122/0, in_queue=1235122, util=97.72% 00:21:52.510 nvme2n1: ios=15814/0, merge=0/0, ticks=1232594/0, in_queue=1232594, util=97.83% 00:21:52.510 nvme3n1: ios=15103/0, merge=0/0, ticks=1234356/0, in_queue=1234356, util=97.98% 00:21:52.510 nvme4n1: ios=15219/0, merge=0/0, ticks=1238645/0, in_queue=1238645, util=98.31% 00:21:52.510 nvme5n1: ios=13800/0, merge=0/0, ticks=1232636/0, 
in_queue=1232636, util=98.42% 00:21:52.510 nvme6n1: ios=18292/0, merge=0/0, ticks=1236841/0, in_queue=1236841, util=98.55% 00:21:52.510 nvme7n1: ios=15221/0, merge=0/0, ticks=1230001/0, in_queue=1230001, util=98.93% 00:21:52.510 nvme8n1: ios=15775/0, merge=0/0, ticks=1231151/0, in_queue=1231151, util=99.10% 00:21:52.510 nvme9n1: ios=16555/0, merge=0/0, ticks=1238896/0, in_queue=1238896, util=99.24% 00:21:52.510 07:40:54 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:52.510 [global] 00:21:52.510 thread=1 00:21:52.510 invalidate=1 00:21:52.510 rw=randwrite 00:21:52.510 time_based=1 00:21:52.510 runtime=10 00:21:52.510 ioengine=libaio 00:21:52.510 direct=1 00:21:52.510 bs=262144 00:21:52.510 iodepth=64 00:21:52.510 norandommap=1 00:21:52.510 numjobs=1 00:21:52.510 00:21:52.510 [job0] 00:21:52.510 filename=/dev/nvme0n1 00:21:52.510 [job1] 00:21:52.510 filename=/dev/nvme10n1 00:21:52.510 [job2] 00:21:52.510 filename=/dev/nvme1n1 00:21:52.510 [job3] 00:21:52.510 filename=/dev/nvme2n1 00:21:52.510 [job4] 00:21:52.510 filename=/dev/nvme3n1 00:21:52.510 [job5] 00:21:52.510 filename=/dev/nvme4n1 00:21:52.510 [job6] 00:21:52.510 filename=/dev/nvme5n1 00:21:52.510 [job7] 00:21:52.510 filename=/dev/nvme6n1 00:21:52.510 [job8] 00:21:52.510 filename=/dev/nvme7n1 00:21:52.510 [job9] 00:21:52.510 filename=/dev/nvme8n1 00:21:52.510 [job10] 00:21:52.510 filename=/dev/nvme9n1 00:21:52.510 Could not set queue depth (nvme0n1) 00:21:52.510 Could not set queue depth (nvme10n1) 00:21:52.510 Could not set queue depth (nvme1n1) 00:21:52.510 Could not set queue depth (nvme2n1) 00:21:52.510 Could not set queue depth (nvme3n1) 00:21:52.510 Could not set queue depth (nvme4n1) 00:21:52.510 Could not set queue depth (nvme5n1) 00:21:52.510 Could not set queue depth (nvme6n1) 00:21:52.510 Could not set queue depth (nvme7n1) 00:21:52.510 Could not set queue depth (nvme8n1) 00:21:52.510 Could 
not set queue depth (nvme9n1) 00:21:52.510 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.510 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.510 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.510 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.510 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:52.511 fio-3.35 00:21:52.511 Starting 11 threads 00:22:02.488 00:22:02.488 job0: (groupid=0, jobs=1): err= 0: pid=4181057: Mon Oct 7 07:41:05 2024 00:22:02.488 write: IOPS=614, BW=154MiB/s (161MB/s)(1546MiB/10064msec); 0 zone resets 00:22:02.488 slat (usec): min=15, max=52703, avg=1404.02, stdev=3559.40 00:22:02.488 clat (usec): min=1762, max=263654, avg=102741.86, stdev=67589.89 00:22:02.488 lat (msec): min=2, max=265, avg=104.15, stdev=68.53 00:22:02.488 clat percentiles (msec): 00:22:02.488 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 38], 20.00th=[ 
43], 00:22:02.488 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 97], 00:22:02.488 | 70.00th=[ 123], 80.00th=[ 167], 90.00th=[ 222], 95.00th=[ 243], 00:22:02.488 | 99.00th=[ 257], 99.50th=[ 259], 99.90th=[ 264], 99.95th=[ 264], 00:22:02.488 | 99.99th=[ 264] 00:22:02.488 bw ( KiB/s): min=67584, max=377856, per=8.83%, avg=156659.60, stdev=91593.54, samples=20 00:22:02.488 iops : min= 264, max= 1476, avg=611.95, stdev=357.79, samples=20 00:22:02.488 lat (msec) : 2=0.02%, 4=0.32%, 10=2.22%, 20=2.83%, 50=19.56% 00:22:02.488 lat (msec) : 100=37.38%, 250=35.18%, 500=2.49% 00:22:02.488 cpu : usr=1.64%, sys=2.02%, ctx=2421, majf=0, minf=1 00:22:02.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:02.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.488 issued rwts: total=0,6182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.488 job1: (groupid=0, jobs=1): err= 0: pid=4181058: Mon Oct 7 07:41:05 2024 00:22:02.488 write: IOPS=686, BW=172MiB/s (180MB/s)(1733MiB/10104msec); 0 zone resets 00:22:02.488 slat (usec): min=21, max=103183, avg=1259.47, stdev=3386.50 00:22:02.488 clat (msec): min=2, max=302, avg=91.99, stdev=55.50 00:22:02.488 lat (msec): min=2, max=302, avg=93.25, stdev=56.21 00:22:02.488 clat percentiles (msec): 00:22:02.488 | 1.00th=[ 16], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:22:02.488 | 30.00th=[ 45], 40.00th=[ 62], 50.00th=[ 78], 60.00th=[ 105], 00:22:02.488 | 70.00th=[ 112], 80.00th=[ 133], 90.00th=[ 167], 95.00th=[ 205], 00:22:02.488 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 296], 00:22:02.488 | 99.99th=[ 305] 00:22:02.488 bw ( KiB/s): min=71680, max=376832, per=9.91%, avg=175846.40, stdev=93089.56, samples=20 00:22:02.488 iops : min= 280, max= 1472, avg=686.90, stdev=363.63, samples=20 00:22:02.488 lat 
(msec) : 4=0.07%, 10=0.59%, 20=0.72%, 50=32.96%, 100=23.51% 00:22:02.488 lat (msec) : 250=40.72%, 500=1.41% 00:22:02.488 cpu : usr=1.34%, sys=2.22%, ctx=2509, majf=0, minf=1 00:22:02.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:02.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.488 issued rwts: total=0,6932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.488 job2: (groupid=0, jobs=1): err= 0: pid=4181059: Mon Oct 7 07:41:05 2024 00:22:02.488 write: IOPS=653, BW=163MiB/s (171MB/s)(1650MiB/10098msec); 0 zone resets 00:22:02.488 slat (usec): min=22, max=52734, avg=1346.33, stdev=3124.10 00:22:02.488 clat (msec): min=3, max=270, avg=96.52, stdev=50.50 00:22:02.488 lat (msec): min=3, max=270, avg=97.87, stdev=51.22 00:22:02.488 clat percentiles (msec): 00:22:02.488 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 59], 00:22:02.488 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 103], 00:22:02.488 | 70.00th=[ 109], 80.00th=[ 128], 90.00th=[ 163], 95.00th=[ 213], 00:22:02.488 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 268], 99.95th=[ 268], 00:22:02.488 | 99.99th=[ 271] 00:22:02.488 bw ( KiB/s): min=69632, max=383488, per=9.44%, avg=167389.85, stdev=74481.14, samples=20 00:22:02.488 iops : min= 272, max= 1498, avg=653.85, stdev=290.94, samples=20 00:22:02.488 lat (msec) : 4=0.03%, 10=0.47%, 20=0.83%, 50=17.57%, 100=38.46% 00:22:02.488 lat (msec) : 250=41.55%, 500=1.08% 00:22:02.488 cpu : usr=2.03%, sys=1.80%, ctx=2504, majf=0, minf=1 00:22:02.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:02.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.488 issued rwts: total=0,6601,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:02.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.488 job3: (groupid=0, jobs=1): err= 0: pid=4181061: Mon Oct 7 07:41:05 2024 00:22:02.488 write: IOPS=753, BW=188MiB/s (198MB/s)(1904MiB/10104msec); 0 zone resets 00:22:02.488 slat (usec): min=21, max=118862, avg=1010.37, stdev=3053.07 00:22:02.488 clat (msec): min=2, max=288, avg=83.84, stdev=43.70 00:22:02.488 lat (msec): min=2, max=288, avg=84.85, stdev=44.01 00:22:02.488 clat percentiles (msec): 00:22:02.488 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 40], 20.00th=[ 46], 00:22:02.488 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 80], 00:22:02.488 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 140], 95.00th=[ 174], 00:22:02.488 | 99.00th=[ 228], 99.50th=[ 247], 99.90th=[ 279], 99.95th=[ 284], 00:22:02.488 | 99.99th=[ 288] 00:22:02.488 bw ( KiB/s): min=90624, max=335360, per=10.90%, avg=193356.80, stdev=54863.72, samples=20 00:22:02.488 iops : min= 354, max= 1310, avg=755.30, stdev=214.31, samples=20 00:22:02.488 lat (msec) : 4=0.22%, 10=2.07%, 20=2.53%, 50=17.17%, 100=46.86% 00:22:02.488 lat (msec) : 250=30.67%, 500=0.46% 00:22:02.488 cpu : usr=1.99%, sys=2.15%, ctx=3339, majf=0, minf=1 00:22:02.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:02.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.488 issued rwts: total=0,7616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.488 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.488 job4: (groupid=0, jobs=1): err= 0: pid=4181062: Mon Oct 7 07:41:05 2024 00:22:02.488 write: IOPS=656, BW=164MiB/s (172MB/s)(1657MiB/10102msec); 0 zone resets 00:22:02.488 slat (usec): min=22, max=113835, avg=1335.16, stdev=3303.12 00:22:02.488 clat (usec): min=1945, max=299107, avg=96142.88, stdev=56243.16 00:22:02.488 lat (msec): min=2, max=299, avg=97.48, 
stdev=56.92 00:22:02.488 clat percentiles (msec): 00:22:02.488 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 41], 20.00th=[ 43], 00:22:02.488 | 30.00th=[ 45], 40.00th=[ 71], 50.00th=[ 101], 60.00th=[ 110], 00:22:02.488 | 70.00th=[ 117], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 201], 00:22:02.489 | 99.00th=[ 228], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 288], 00:22:02.489 | 99.99th=[ 300] 00:22:02.489 bw ( KiB/s): min=83968, max=387584, per=9.48%, avg=168076.00, stdev=94352.12, samples=20 00:22:02.489 iops : min= 328, max= 1514, avg=656.50, stdev=368.59, samples=20 00:22:02.489 lat (msec) : 2=0.02%, 4=0.15%, 10=1.98%, 20=2.41%, 50=28.80% 00:22:02.489 lat (msec) : 100=16.66%, 250=49.61%, 500=0.38% 00:22:02.489 cpu : usr=2.08%, sys=1.88%, ctx=2329, majf=0, minf=1 00:22:02.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:02.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.489 issued rwts: total=0,6628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.489 job5: (groupid=0, jobs=1): err= 0: pid=4181063: Mon Oct 7 07:41:05 2024 00:22:02.489 write: IOPS=751, BW=188MiB/s (197MB/s)(1897MiB/10095msec); 0 zone resets 00:22:02.489 slat (usec): min=19, max=74294, avg=981.19, stdev=2738.65 00:22:02.489 clat (msec): min=2, max=336, avg=84.12, stdev=58.08 00:22:02.489 lat (msec): min=2, max=336, avg=85.10, stdev=58.62 00:22:02.489 clat percentiles (msec): 00:22:02.489 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 42], 00:22:02.489 | 30.00th=[ 44], 40.00th=[ 50], 50.00th=[ 68], 60.00th=[ 89], 00:22:02.489 | 70.00th=[ 104], 80.00th=[ 122], 90.00th=[ 155], 95.00th=[ 215], 00:22:02.489 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 330], 00:22:02.489 | 99.99th=[ 338] 00:22:02.489 bw ( KiB/s): min=65536, max=353792, per=10.86%, avg=192680.10, 
stdev=83729.97, samples=20 00:22:02.489 iops : min= 256, max= 1382, avg=752.65, stdev=327.07, samples=20 00:22:02.489 lat (msec) : 4=0.20%, 10=1.38%, 20=4.30%, 50=35.13%, 100=24.84% 00:22:02.489 lat (msec) : 250=31.81%, 500=2.35% 00:22:02.489 cpu : usr=1.71%, sys=2.48%, ctx=3372, majf=0, minf=2 00:22:02.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:02.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.489 issued rwts: total=0,7589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.489 job6: (groupid=0, jobs=1): err= 0: pid=4181075: Mon Oct 7 07:41:05 2024 00:22:02.489 write: IOPS=523, BW=131MiB/s (137MB/s)(1322MiB/10097msec); 0 zone resets 00:22:02.489 slat (usec): min=24, max=114537, avg=1498.87, stdev=4127.52 00:22:02.489 clat (msec): min=2, max=307, avg=120.63, stdev=62.72 00:22:02.489 lat (msec): min=3, max=307, avg=122.13, stdev=63.59 00:22:02.489 clat percentiles (msec): 00:22:02.489 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 69], 00:22:02.489 | 30.00th=[ 88], 40.00th=[ 105], 50.00th=[ 116], 60.00th=[ 129], 00:22:02.489 | 70.00th=[ 138], 80.00th=[ 163], 90.00th=[ 218], 95.00th=[ 234], 00:22:02.489 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 305], 00:22:02.489 | 99.99th=[ 309] 00:22:02.489 bw ( KiB/s): min=64000, max=212480, per=7.54%, avg=133775.05, stdev=44627.49, samples=20 00:22:02.489 iops : min= 250, max= 830, avg=522.55, stdev=174.32, samples=20 00:22:02.489 lat (msec) : 4=0.11%, 10=1.02%, 20=2.23%, 50=9.21%, 100=22.71% 00:22:02.489 lat (msec) : 250=60.91%, 500=3.80% 00:22:02.489 cpu : usr=1.29%, sys=1.73%, ctx=2539, majf=0, minf=1 00:22:02.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:02.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:02.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.489 issued rwts: total=0,5288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.489 job7: (groupid=0, jobs=1): err= 0: pid=4181076: Mon Oct 7 07:41:05 2024 00:22:02.489 write: IOPS=526, BW=132MiB/s (138MB/s)(1326MiB/10064msec); 0 zone resets 00:22:02.489 slat (usec): min=23, max=82939, avg=1580.99, stdev=3888.73 00:22:02.489 clat (msec): min=2, max=256, avg=119.86, stdev=61.10 00:22:02.489 lat (msec): min=3, max=263, avg=121.44, stdev=61.96 00:22:02.489 clat percentiles (msec): 00:22:02.489 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 45], 20.00th=[ 71], 00:22:02.489 | 30.00th=[ 85], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 124], 00:22:02.489 | 70.00th=[ 142], 80.00th=[ 188], 90.00th=[ 213], 95.00th=[ 228], 00:22:02.489 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 257], 00:22:02.489 | 99.99th=[ 257] 00:22:02.489 bw ( KiB/s): min=65536, max=224768, per=7.56%, avg=134118.40, stdev=50942.67, samples=20 00:22:02.489 iops : min= 256, max= 878, avg=523.90, stdev=198.99, samples=20 00:22:02.489 lat (msec) : 4=0.11%, 10=0.92%, 20=3.30%, 50=6.85%, 100=28.22% 00:22:02.489 lat (msec) : 250=60.00%, 500=0.60% 00:22:02.489 cpu : usr=1.30%, sys=1.56%, ctx=2310, majf=0, minf=1 00:22:02.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:02.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.489 issued rwts: total=0,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.489 job8: (groupid=0, jobs=1): err= 0: pid=4181077: Mon Oct 7 07:41:05 2024 00:22:02.489 write: IOPS=566, BW=142MiB/s (149MB/s)(1430MiB/10098msec); 0 zone resets 00:22:02.489 slat (usec): min=28, max=42026, avg=1394.57, stdev=3519.23 
00:22:02.489 clat (msec): min=2, max=287, avg=111.52, stdev=64.04 00:22:02.489 lat (msec): min=2, max=289, avg=112.92, stdev=64.97 00:22:02.489 clat percentiles (msec): 00:22:02.489 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 63], 00:22:02.489 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 101], 60.00th=[ 107], 00:22:02.489 | 70.00th=[ 142], 80.00th=[ 186], 90.00th=[ 211], 95.00th=[ 222], 00:22:02.489 | 99.00th=[ 249], 99.50th=[ 268], 99.90th=[ 284], 99.95th=[ 284], 00:22:02.489 | 99.99th=[ 288] 00:22:02.489 bw ( KiB/s): min=69632, max=266240, per=8.17%, avg=144857.45, stdev=62068.76, samples=20 00:22:02.489 iops : min= 272, max= 1040, avg=565.80, stdev=242.47, samples=20 00:22:02.489 lat (msec) : 4=0.42%, 10=1.61%, 20=3.92%, 50=9.14%, 100=35.03% 00:22:02.489 lat (msec) : 250=49.06%, 500=0.82% 00:22:02.489 cpu : usr=1.57%, sys=1.70%, ctx=2812, majf=0, minf=1 00:22:02.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:02.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.489 issued rwts: total=0,5721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.489 job9: (groupid=0, jobs=1): err= 0: pid=4181078: Mon Oct 7 07:41:05 2024 00:22:02.489 write: IOPS=557, BW=139MiB/s (146MB/s)(1407MiB/10103msec); 0 zone resets 00:22:02.489 slat (usec): min=18, max=72780, avg=1555.30, stdev=3859.42 00:22:02.489 clat (msec): min=2, max=309, avg=113.24, stdev=62.93 00:22:02.489 lat (msec): min=2, max=321, avg=114.79, stdev=63.74 00:22:02.489 clat percentiles (msec): 00:22:02.489 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 53], 20.00th=[ 70], 00:22:02.489 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 100], 60.00th=[ 111], 00:22:02.489 | 70.00th=[ 134], 80.00th=[ 161], 90.00th=[ 224], 95.00th=[ 247], 00:22:02.489 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 288], 
00:22:02.489 | 99.99th=[ 309] 00:22:02.489 bw ( KiB/s): min=61440, max=254976, per=8.03%, avg=142489.60, stdev=57837.54, samples=20 00:22:02.489 iops : min= 240, max= 996, avg=556.60, stdev=225.93, samples=20 00:22:02.489 lat (msec) : 4=0.07%, 10=1.53%, 20=2.58%, 50=5.38%, 100=41.77% 00:22:02.489 lat (msec) : 250=44.73%, 500=3.94% 00:22:02.489 cpu : usr=1.31%, sys=1.63%, ctx=2154, majf=0, minf=1 00:22:02.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:02.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.490 issued rwts: total=0,5629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.490 job10: (groupid=0, jobs=1): err= 0: pid=4181079: Mon Oct 7 07:41:05 2024 00:22:02.490 write: IOPS=645, BW=161MiB/s (169MB/s)(1628MiB/10097msec); 0 zone resets 00:22:02.490 slat (usec): min=19, max=78404, avg=1279.77, stdev=3508.78 00:22:02.490 clat (msec): min=2, max=283, avg=97.62, stdev=62.76 00:22:02.490 lat (msec): min=2, max=295, avg=98.90, stdev=63.60 00:22:02.490 clat percentiles (msec): 00:22:02.490 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 37], 20.00th=[ 43], 00:22:02.490 | 30.00th=[ 54], 40.00th=[ 73], 50.00th=[ 88], 60.00th=[ 103], 00:22:02.490 | 70.00th=[ 107], 80.00th=[ 133], 90.00th=[ 211], 95.00th=[ 230], 00:22:02.490 | 99.00th=[ 268], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 284], 00:22:02.490 | 99.99th=[ 284] 00:22:02.490 bw ( KiB/s): min=68608, max=352256, per=9.31%, avg=165120.00, stdev=75260.88, samples=20 00:22:02.490 iops : min= 268, max= 1376, avg=645.00, stdev=293.99, samples=20 00:22:02.490 lat (msec) : 4=0.06%, 10=1.52%, 20=4.15%, 50=22.16%, 100=30.06% 00:22:02.490 lat (msec) : 250=40.14%, 500=1.92% 00:22:02.490 cpu : usr=1.77%, sys=1.87%, ctx=2795, majf=0, minf=1 00:22:02.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, 
>=64=99.0% 00:22:02.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:02.490 issued rwts: total=0,6513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.490 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:02.490 00:22:02.490 Run status group 0 (all jobs): 00:22:02.490 WRITE: bw=1732MiB/s (1816MB/s), 131MiB/s-188MiB/s (137MB/s-198MB/s), io=17.1GiB (18.3GB), run=10064-10104msec 00:22:02.490 00:22:02.490 Disk stats (read/write): 00:22:02.490 nvme0n1: ios=49/12092, merge=0/0, ticks=70/1214291, in_queue=1214361, util=97.62% 00:22:02.490 nvme10n1: ios=49/13645, merge=0/0, ticks=65/1212462, in_queue=1212527, util=97.58% 00:22:02.490 nvme1n1: ios=49/12985, merge=0/0, ticks=43/1211713, in_queue=1211756, util=97.74% 00:22:02.490 nvme2n1: ios=49/15012, merge=0/0, ticks=2562/1191765, in_queue=1194327, util=99.69% 00:22:02.490 nvme3n1: ios=48/13040, merge=0/0, ticks=1401/1205626, in_queue=1207027, util=99.78% 00:22:02.490 nvme4n1: ios=49/14959, merge=0/0, ticks=132/1219240, in_queue=1219372, util=98.80% 00:22:02.490 nvme5n1: ios=46/10351, merge=0/0, ticks=1326/1215861, in_queue=1217187, util=100.00% 00:22:02.490 nvme6n1: ios=26/10337, merge=0/0, ticks=42/1216947, in_queue=1216989, util=98.48% 00:22:02.490 nvme7n1: ios=0/11223, merge=0/0, ticks=0/1215045, in_queue=1215045, util=98.75% 00:22:02.490 nvme8n1: ios=46/11041, merge=0/0, ticks=1330/1210080, in_queue=1211410, util=99.89% 00:22:02.490 nvme9n1: ios=48/12804, merge=0/0, ticks=437/1209687, in_queue=1210124, util=100.00% 00:22:02.490 07:41:05 -- target/multiconnection.sh@36 -- # sync 00:22:02.490 07:41:05 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:02.490 07:41:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.490 07:41:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:02.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 
1 controller(s) 00:22:02.490 07:41:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:02.490 07:41:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:02.490 07:41:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:02.490 07:41:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:02.490 07:41:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:02.490 07:41:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:02.490 07:41:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:02.490 07:41:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.490 07:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.490 07:41:05 -- common/autotest_common.sh@10 -- # set +x 00:22:02.490 07:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.490 07:41:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.490 07:41:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:02.490 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:02.490 07:41:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:02.490 07:41:06 -- common/autotest_common.sh@1198 -- # local i=0 00:22:02.490 07:41:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:02.490 07:41:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:02.490 07:41:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:02.490 07:41:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:02.490 07:41:06 -- common/autotest_common.sh@1210 -- # return 0 00:22:02.490 07:41:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:02.490 07:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.490 07:41:06 -- common/autotest_common.sh@10 -- # set +x 00:22:02.490 
07:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.490 07:41:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.490 07:41:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:02.749 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:02.749 07:41:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:02.749 07:41:06 -- common/autotest_common.sh@1198 -- # local i=0 00:22:02.749 07:41:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:02.749 07:41:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:02.749 07:41:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:02.749 07:41:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:02.749 07:41:06 -- common/autotest_common.sh@1210 -- # return 0 00:22:02.749 07:41:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:02.749 07:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.749 07:41:06 -- common/autotest_common.sh@10 -- # set +x 00:22:02.749 07:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.749 07:41:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.749 07:41:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:03.008 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:03.008 07:41:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:03.008 07:41:06 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.008 07:41:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.008 07:41:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:03.008 07:41:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:03.008 07:41:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.008 07:41:06 -- 
common/autotest_common.sh@1210 -- # return 0 00:22:03.008 07:41:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:03.008 07:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.008 07:41:06 -- common/autotest_common.sh@10 -- # set +x 00:22:03.008 07:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.008 07:41:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.008 07:41:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:03.008 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:03.008 07:41:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:03.008 07:41:06 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.008 07:41:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.008 07:41:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:03.008 07:41:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.008 07:41:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:03.008 07:41:06 -- common/autotest_common.sh@1210 -- # return 0 00:22:03.008 07:41:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:03.008 07:41:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.008 07:41:06 -- common/autotest_common.sh@10 -- # set +x 00:22:03.268 07:41:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.268 07:41:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.268 07:41:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:03.268 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:03.268 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:03.268 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.268 07:41:07 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.268 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:03.268 07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.268 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:03.268 07:41:07 -- common/autotest_common.sh@1210 -- # return 0 00:22:03.268 07:41:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:03.268 07:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.268 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:22:03.268 07:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.268 07:41:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.268 07:41:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:03.527 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:03.527 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:03.527 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.527 07:41:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.527 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:03.527 07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.527 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:03.786 07:41:07 -- common/autotest_common.sh@1210 -- # return 0 00:22:03.786 07:41:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:03.786 07:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.786 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:22:03.786 07:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.786 07:41:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.786 07:41:07 -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:03.786 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:03.786 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:03.786 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.786 07:41:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.786 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:03.786 07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.786 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:03.786 07:41:07 -- common/autotest_common.sh@1210 -- # return 0 00:22:03.786 07:41:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:03.786 07:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.786 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:22:03.786 07:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.786 07:41:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.786 07:41:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:03.786 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:03.786 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:03.786 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:03.786 07:41:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:03.786 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:03.786 07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:03.786 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:04.045 07:41:07 -- common/autotest_common.sh@1210 -- # return 0 00:22:04.045 07:41:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:04.045 07:41:07 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:22:04.045 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:22:04.045 07:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.045 07:41:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.045 07:41:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:04.045 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:04.045 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:04.045 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:04.045 07:41:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:04.045 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:04.045 07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:04.045 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:04.045 07:41:07 -- common/autotest_common.sh@1210 -- # return 0 00:22:04.045 07:41:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:04.045 07:41:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.045 07:41:07 -- common/autotest_common.sh@10 -- # set +x 00:22:04.045 07:41:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.045 07:41:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.045 07:41:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:04.045 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:04.045 07:41:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:04.045 07:41:07 -- common/autotest_common.sh@1198 -- # local i=0 00:22:04.045 07:41:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:04.045 07:41:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:04.045 07:41:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:04.045 
07:41:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:04.045 07:41:08 -- common/autotest_common.sh@1210 -- # return 0 00:22:04.045 07:41:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:04.045 07:41:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.045 07:41:08 -- common/autotest_common.sh@10 -- # set +x 00:22:04.304 07:41:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.304 07:41:08 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:04.304 07:41:08 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:04.304 07:41:08 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:04.304 07:41:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:04.304 07:41:08 -- nvmf/common.sh@116 -- # sync 00:22:04.304 07:41:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:04.304 07:41:08 -- nvmf/common.sh@119 -- # set +e 00:22:04.304 07:41:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:04.304 07:41:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:04.304 rmmod nvme_tcp 00:22:04.304 rmmod nvme_fabrics 00:22:04.304 rmmod nvme_keyring 00:22:04.304 07:41:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:04.304 07:41:08 -- nvmf/common.sh@123 -- # set -e 00:22:04.304 07:41:08 -- nvmf/common.sh@124 -- # return 0 00:22:04.304 07:41:08 -- nvmf/common.sh@477 -- # '[' -n 4172805 ']' 00:22:04.304 07:41:08 -- nvmf/common.sh@478 -- # killprocess 4172805 00:22:04.304 07:41:08 -- common/autotest_common.sh@926 -- # '[' -z 4172805 ']' 00:22:04.304 07:41:08 -- common/autotest_common.sh@930 -- # kill -0 4172805 00:22:04.304 07:41:08 -- common/autotest_common.sh@931 -- # uname 00:22:04.304 07:41:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:04.304 07:41:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4172805 00:22:04.304 07:41:08 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:22:04.304 07:41:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:04.304 07:41:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4172805' 00:22:04.304 killing process with pid 4172805 00:22:04.304 07:41:08 -- common/autotest_common.sh@945 -- # kill 4172805 00:22:04.304 07:41:08 -- common/autotest_common.sh@950 -- # wait 4172805 00:22:04.884 07:41:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:04.884 07:41:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:04.884 07:41:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:04.884 07:41:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.884 07:41:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:04.884 07:41:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.884 07:41:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.884 07:41:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.789 07:41:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:06.789 00:22:06.789 real 1m10.211s 00:22:06.789 user 4m10.729s 00:22:06.789 sys 0m25.272s 00:22:06.789 07:41:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.789 07:41:10 -- common/autotest_common.sh@10 -- # set +x 00:22:06.789 ************************************ 00:22:06.789 END TEST nvmf_multiconnection 00:22:06.789 ************************************ 00:22:06.789 07:41:10 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:06.789 07:41:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:06.789 07:41:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:06.789 07:41:10 -- common/autotest_common.sh@10 -- # set +x 00:22:06.789 ************************************ 00:22:06.789 START TEST nvmf_initiator_timeout 00:22:06.789 
************************************ 00:22:06.789 07:41:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:06.789 * Looking for test storage... 00:22:07.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.049 07:41:10 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.049 07:41:10 -- nvmf/common.sh@7 -- # uname -s 00:22:07.049 07:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.049 07:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.049 07:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.049 07:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.049 07:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.049 07:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.049 07:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.049 07:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.049 07:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.049 07:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.049 07:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.049 07:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.049 07:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.049 07:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.049 07:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.049 07:41:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.049 07:41:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.049 07:41:10 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.049 07:41:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.049 07:41:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.049 07:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.049 07:41:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.049 07:41:10 -- paths/export.sh@5 -- # export PATH 00:22:07.049 07:41:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.049 07:41:10 -- nvmf/common.sh@46 -- # : 0 00:22:07.049 07:41:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:07.049 07:41:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:07.049 07:41:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:07.049 07:41:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.049 07:41:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.049 07:41:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:07.049 07:41:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:07.049 07:41:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:07.049 07:41:10 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.049 07:41:10 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.049 07:41:10 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:07.049 07:41:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:07.049 07:41:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.049 07:41:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:07.049 07:41:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:07.049 07:41:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:07.049 07:41:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.049 07:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.049 07:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:22:07.049 07:41:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:07.049 07:41:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:07.049 07:41:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:07.049 07:41:10 -- common/autotest_common.sh@10 -- # set +x 00:22:12.322 07:41:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:12.322 07:41:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:12.322 07:41:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:12.322 07:41:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:12.322 07:41:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:12.322 07:41:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:12.322 07:41:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:12.322 07:41:16 -- nvmf/common.sh@294 -- # net_devs=() 00:22:12.322 07:41:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:12.322 07:41:16 -- nvmf/common.sh@295 -- # e810=() 00:22:12.322 07:41:16 -- nvmf/common.sh@295 -- # local -ga e810 00:22:12.322 07:41:16 -- nvmf/common.sh@296 -- # x722=() 00:22:12.322 07:41:16 -- nvmf/common.sh@296 -- # local -ga x722 00:22:12.322 07:41:16 -- nvmf/common.sh@297 -- # mlx=() 00:22:12.322 07:41:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:12.322 07:41:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:22:12.322 07:41:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.322 07:41:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:12.322 07:41:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:12.322 07:41:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:12.322 07:41:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:12.322 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:12.322 07:41:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:12.322 07:41:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:12.322 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:12.322 07:41:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:12.322 
07:41:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:12.322 07:41:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.322 07:41:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.322 07:41:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:12.322 Found net devices under 0000:af:00.0: cvl_0_0 00:22:12.322 07:41:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.322 07:41:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:12.322 07:41:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.322 07:41:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.322 07:41:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:12.322 Found net devices under 0000:af:00.1: cvl_0_1 00:22:12.322 07:41:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.322 07:41:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:12.322 07:41:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:12.322 07:41:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:12.322 07:41:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.322 07:41:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.322 07:41:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.322 07:41:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:12.322 07:41:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.322 07:41:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.322 07:41:16 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:12.322 07:41:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.322 07:41:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.322 07:41:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:12.322 07:41:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:12.322 07:41:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.322 07:41:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.581 07:41:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.581 07:41:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.581 07:41:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:12.581 07:41:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.581 07:41:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.581 07:41:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.581 07:41:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:12.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:22:12.581 00:22:12.581 --- 10.0.0.2 ping statistics --- 00:22:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.581 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:22:12.581 07:41:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:22:12.581 00:22:12.581 --- 10.0.0.1 ping statistics --- 00:22:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.581 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:12.581 07:41:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.581 07:41:16 -- nvmf/common.sh@410 -- # return 0 00:22:12.581 07:41:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:12.581 07:41:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.581 07:41:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:12.581 07:41:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:12.581 07:41:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.581 07:41:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:12.581 07:41:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:12.581 07:41:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:12.581 07:41:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:12.581 07:41:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:12.581 07:41:16 -- common/autotest_common.sh@10 -- # set +x 00:22:12.581 07:41:16 -- nvmf/common.sh@469 -- # nvmfpid=4186431 00:22:12.581 07:41:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:12.581 07:41:16 -- nvmf/common.sh@470 -- # waitforlisten 4186431 00:22:12.582 07:41:16 -- common/autotest_common.sh@819 -- # '[' -z 4186431 ']' 00:22:12.582 07:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.582 07:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:12.582 07:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:12.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.582 07:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:12.582 07:41:16 -- common/autotest_common.sh@10 -- # set +x 00:22:12.582 [2024-10-07 07:41:16.537248] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:12.582 [2024-10-07 07:41:16.537295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.840 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.840 [2024-10-07 07:41:16.597109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.840 [2024-10-07 07:41:16.667965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:12.840 [2024-10-07 07:41:16.668084] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.840 [2024-10-07 07:41:16.668092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.840 [2024-10-07 07:41:16.668098] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.840 [2024-10-07 07:41:16.668153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.840 [2024-10-07 07:41:16.668252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.840 [2024-10-07 07:41:16.668319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.840 [2024-10-07 07:41:16.668320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.405 07:41:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:13.406 07:41:17 -- common/autotest_common.sh@852 -- # return 0 00:22:13.406 07:41:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:13.406 07:41:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:13.406 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.663 07:41:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.664 Malloc0 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.664 Delay0 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.664 [2024-10-07 07:41:17.422339] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.664 07:41:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.664 07:41:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.664 [2024-10-07 07:41:17.447168] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.664 07:41:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.664 07:41:17 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:14.597 07:41:18 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:14.597 07:41:18 -- common/autotest_common.sh@1177 -- # local i=0 00:22:14.597 07:41:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:14.597 07:41:18 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:14.597 07:41:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:17.191 07:41:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:17.191 07:41:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:17.191 07:41:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:17.191 07:41:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:17.191 07:41:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.191 07:41:20 -- common/autotest_common.sh@1187 -- # return 0 00:22:17.191 07:41:20 -- target/initiator_timeout.sh@35 -- # fio_pid=4187134 00:22:17.191 07:41:20 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:17.191 07:41:20 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:17.191 [global] 00:22:17.191 thread=1 00:22:17.191 invalidate=1 00:22:17.191 rw=write 00:22:17.191 time_based=1 00:22:17.191 runtime=60 00:22:17.191 ioengine=libaio 00:22:17.191 direct=1 00:22:17.191 bs=4096 00:22:17.191 iodepth=1 00:22:17.191 norandommap=0 00:22:17.191 numjobs=1 00:22:17.191 00:22:17.191 verify_dump=1 00:22:17.191 verify_backlog=512 00:22:17.191 verify_state_save=0 00:22:17.191 do_verify=1 00:22:17.191 verify=crc32c-intel 00:22:17.191 [job0] 00:22:17.191 filename=/dev/nvme0n1 00:22:17.191 Could not set queue depth (nvme0n1) 00:22:17.191 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:17.191 fio-3.35 00:22:17.191 Starting 1 thread 00:22:19.724 07:41:23 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:19.724 07:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.724 07:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.724 true 00:22:19.724 07:41:23 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:22:19.724 07:41:23 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:19.724 07:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.724 07:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.724 true 00:22:19.724 07:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.724 07:41:23 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:19.724 07:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.724 07:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.724 true 00:22:19.724 07:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.724 07:41:23 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:19.724 07:41:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.724 07:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.724 true 00:22:19.724 07:41:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.724 07:41:23 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:23.012 07:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.012 07:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 true 00:22:23.012 07:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:23.012 07:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.012 07:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 true 00:22:23.012 07:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:23.012 07:41:26 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:22:23.012 07:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 true 00:22:23.012 07:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:23.012 07:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.012 07:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.012 true 00:22:23.012 07:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:23.012 07:41:26 -- target/initiator_timeout.sh@54 -- # wait 4187134 00:23:19.228 00:23:19.228 job0: (groupid=0, jobs=1): err= 0: pid=4187275: Mon Oct 7 07:42:21 2024 00:23:19.228 read: IOPS=14, BW=59.2KiB/s (60.6kB/s)(3552KiB/60017msec) 00:23:19.228 slat (usec): min=6, max=6828, avg=23.18, stdev=228.79 00:23:19.228 clat (usec): min=372, max=41341k, avg=67240.82, stdev=1386753.35 00:23:19.228 lat (usec): min=379, max=41341k, avg=67264.00, stdev=1386753.65 00:23:19.228 clat percentiles (usec): 00:23:19.228 | 1.00th=[ 383], 5.00th=[ 392], 10.00th=[ 400], 00:23:19.228 | 20.00th=[ 416], 30.00th=[ 474], 40.00th=[ 482], 00:23:19.228 | 50.00th=[ 611], 60.00th=[ 41157], 70.00th=[ 41157], 00:23:19.228 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:23:19.228 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:23:19.228 | 99.95th=[17112761], 99.99th=[17112761] 00:23:19.228 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60017msec); 0 zone resets 00:23:19.228 slat (usec): min=9, max=27014, avg=37.79, stdev=843.85 00:23:19.228 clat (usec): min=205, max=473, avg=234.06, stdev=16.04 00:23:19.228 lat (usec): min=215, max=27364, avg=271.85, stdev=847.64 00:23:19.228 clat percentiles (usec): 00:23:19.228 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:23:19.228 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:23:19.228 | 70.00th=[ 
239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:23:19.228 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 474], 00:23:19.228 | 99.99th=[ 474] 00:23:19.228 bw ( KiB/s): min= 1240, max= 4096, per=100.00%, avg=2730.67, stdev=1432.12, samples=3 00:23:19.228 iops : min= 310, max= 1024, avg=682.67, stdev=358.03, samples=3 00:23:19.228 lat (usec) : 250=47.91%, 500=28.71%, 750=0.16% 00:23:19.228 lat (msec) : 50=23.17%, >=2000=0.05% 00:23:19.228 cpu : usr=0.03%, sys=0.05%, ctx=1916, majf=0, minf=1 00:23:19.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:19.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.228 issued rwts: total=888,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:19.228 00:23:19.228 Run status group 0 (all jobs): 00:23:19.228 READ: bw=59.2KiB/s (60.6kB/s), 59.2KiB/s-59.2KiB/s (60.6kB/s-60.6kB/s), io=3552KiB (3637kB), run=60017-60017msec 00:23:19.228 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60017-60017msec 00:23:19.228 00:23:19.228 Disk stats (read/write): 00:23:19.228 nvme0n1: ios=937/1024, merge=0/0, ticks=19782/233, in_queue=20015, util=99.91% 00:23:19.228 07:42:21 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:19.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:19.228 07:42:21 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:19.228 07:42:21 -- common/autotest_common.sh@1198 -- # local i=0 00:23:19.228 07:42:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:19.229 07:42:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:19.229 07:42:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:23:19.229 07:42:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:19.229 07:42:21 -- common/autotest_common.sh@1210 -- # return 0 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:19.229 nvmf hotplug test: fio successful as expected 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.229 07:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.229 07:42:21 -- common/autotest_common.sh@10 -- # set +x 00:23:19.229 07:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:19.229 07:42:21 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:19.229 07:42:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:19.229 07:42:21 -- nvmf/common.sh@116 -- # sync 00:23:19.229 07:42:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:19.229 07:42:21 -- nvmf/common.sh@119 -- # set +e 00:23:19.229 07:42:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:19.229 07:42:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:19.229 rmmod nvme_tcp 00:23:19.229 rmmod nvme_fabrics 00:23:19.229 rmmod nvme_keyring 00:23:19.229 07:42:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:19.229 07:42:21 -- nvmf/common.sh@123 -- # set -e 00:23:19.229 07:42:21 -- nvmf/common.sh@124 -- # return 0 00:23:19.229 07:42:21 -- nvmf/common.sh@477 -- # '[' -n 4186431 ']' 00:23:19.229 07:42:21 -- nvmf/common.sh@478 -- # killprocess 4186431 00:23:19.229 07:42:21 -- common/autotest_common.sh@926 -- # '[' -z 4186431 ']' 00:23:19.229 07:42:21 -- common/autotest_common.sh@930 -- # kill -0 4186431 00:23:19.229 07:42:21 -- 
common/autotest_common.sh@931 -- # uname 00:23:19.229 07:42:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:19.229 07:42:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4186431 00:23:19.229 07:42:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:19.229 07:42:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:19.229 07:42:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4186431' 00:23:19.229 killing process with pid 4186431 00:23:19.229 07:42:21 -- common/autotest_common.sh@945 -- # kill 4186431 00:23:19.229 07:42:21 -- common/autotest_common.sh@950 -- # wait 4186431 00:23:19.229 07:42:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:19.229 07:42:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:19.229 07:42:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:19.229 07:42:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.229 07:42:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:19.229 07:42:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.229 07:42:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.229 07:42:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.796 07:42:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:19.796 00:23:19.796 real 1m12.945s 00:23:19.796 user 4m24.622s 00:23:19.796 sys 0m6.443s 00:23:19.796 07:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.796 07:42:23 -- common/autotest_common.sh@10 -- # set +x 00:23:19.796 ************************************ 00:23:19.796 END TEST nvmf_initiator_timeout 00:23:19.796 ************************************ 00:23:19.796 07:42:23 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:19.796 07:42:23 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:19.796 07:42:23 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:19.796 07:42:23 -- nvmf/common.sh@284 
-- # xtrace_disable 00:23:19.796 07:42:23 -- common/autotest_common.sh@10 -- # set +x 00:23:25.057 07:42:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.057 07:42:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:25.057 07:42:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:25.057 07:42:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:25.057 07:42:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:25.057 07:42:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:25.057 07:42:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:25.057 07:42:28 -- nvmf/common.sh@294 -- # net_devs=() 00:23:25.057 07:42:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:25.057 07:42:28 -- nvmf/common.sh@295 -- # e810=() 00:23:25.057 07:42:28 -- nvmf/common.sh@295 -- # local -ga e810 00:23:25.057 07:42:28 -- nvmf/common.sh@296 -- # x722=() 00:23:25.057 07:42:28 -- nvmf/common.sh@296 -- # local -ga x722 00:23:25.057 07:42:28 -- nvmf/common.sh@297 -- # mlx=() 00:23:25.057 07:42:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:25.057 07:42:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.057 
07:42:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.057 07:42:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:25.057 07:42:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:25.057 07:42:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:25.057 07:42:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.057 07:42:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:25.057 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:25.057 07:42:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.057 07:42:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:25.057 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:25.057 07:42:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:25.057 07:42:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.057 07:42:28 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.057 07:42:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.057 07:42:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.057 07:42:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:25.057 Found net devices under 0000:af:00.0: cvl_0_0 00:23:25.057 07:42:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.057 07:42:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.057 07:42:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.057 07:42:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.057 07:42:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.057 07:42:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:25.057 Found net devices under 0000:af:00.1: cvl_0_1 00:23:25.057 07:42:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.057 07:42:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:25.057 07:42:28 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.057 07:42:28 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:25.057 07:42:28 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:25.057 07:42:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:25.057 07:42:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:25.057 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.057 ************************************ 00:23:25.057 START TEST nvmf_perf_adq 00:23:25.057 ************************************ 00:23:25.057 07:42:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:25.057 * Looking for test storage... 
00:23:25.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:25.057 07:42:28 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.057 07:42:28 -- nvmf/common.sh@7 -- # uname -s 00:23:25.057 07:42:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.057 07:42:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.057 07:42:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.057 07:42:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.057 07:42:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.057 07:42:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.057 07:42:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.057 07:42:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.057 07:42:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.058 07:42:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.058 07:42:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:25.058 07:42:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:25.058 07:42:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.058 07:42:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.058 07:42:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.058 07:42:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.058 07:42:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.058 07:42:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.058 07:42:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.058 07:42:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.058 07:42:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.058 07:42:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.058 07:42:28 -- paths/export.sh@5 -- # export PATH 00:23:25.058 07:42:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.058 07:42:28 -- nvmf/common.sh@46 -- # : 0 00:23:25.058 07:42:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:25.058 07:42:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:25.058 07:42:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:25.058 07:42:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.058 07:42:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.058 07:42:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:25.058 07:42:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:25.058 07:42:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:25.058 07:42:28 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:25.058 07:42:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:25.058 07:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:30.320 07:42:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:30.320 07:42:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:30.320 07:42:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:30.320 07:42:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:30.320 07:42:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:30.320 07:42:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:30.320 07:42:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:30.320 07:42:34 -- nvmf/common.sh@294 -- # net_devs=() 00:23:30.320 07:42:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:30.320 07:42:34 
-- nvmf/common.sh@295 -- # e810=() 00:23:30.320 07:42:34 -- nvmf/common.sh@295 -- # local -ga e810 00:23:30.320 07:42:34 -- nvmf/common.sh@296 -- # x722=() 00:23:30.320 07:42:34 -- nvmf/common.sh@296 -- # local -ga x722 00:23:30.320 07:42:34 -- nvmf/common.sh@297 -- # mlx=() 00:23:30.320 07:42:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:30.320 07:42:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.320 07:42:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:30.320 07:42:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:30.320 07:42:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:30.320 07:42:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:30.320 07:42:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:30.320 07:42:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:30.320 07:42:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.320 07:42:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:30.320 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:23:30.320 07:42:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.321 07:42:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:30.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:30.321 07:42:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:30.321 07:42:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:30.321 07:42:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.321 07:42:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.321 07:42:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.321 07:42:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.321 07:42:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:30.321 Found net devices under 0000:af:00.0: cvl_0_0 00:23:30.321 07:42:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.321 07:42:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.321 07:42:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.321 07:42:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.321 07:42:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.321 07:42:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:30.321 Found net devices under 0000:af:00.1: cvl_0_1 00:23:30.321 07:42:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.321 07:42:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:30.321 07:42:34 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.321 07:42:34 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:30.321 07:42:34 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:30.321 07:42:34 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:30.321 07:42:34 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:31.396 07:42:35 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:33.301 07:42:37 -- target/perf_adq.sh@54 -- # sleep 5 00:23:38.572 07:42:42 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:38.572 07:42:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:38.572 07:42:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.572 07:42:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:38.572 07:42:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:38.572 07:42:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:38.572 07:42:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.572 07:42:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.572 07:42:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.572 07:42:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:38.572 07:42:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:38.572 07:42:42 -- common/autotest_common.sh@10 -- # set +x 00:23:38.572 07:42:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:38.572 07:42:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:38.572 
07:42:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:38.572 07:42:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:38.572 07:42:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:38.572 07:42:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:38.572 07:42:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:38.572 07:42:42 -- nvmf/common.sh@294 -- # net_devs=() 00:23:38.572 07:42:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:38.572 07:42:42 -- nvmf/common.sh@295 -- # e810=() 00:23:38.572 07:42:42 -- nvmf/common.sh@295 -- # local -ga e810 00:23:38.572 07:42:42 -- nvmf/common.sh@296 -- # x722=() 00:23:38.572 07:42:42 -- nvmf/common.sh@296 -- # local -ga x722 00:23:38.572 07:42:42 -- nvmf/common.sh@297 -- # mlx=() 00:23:38.572 07:42:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:38.572 07:42:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.572 07:42:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:38.572 07:42:42 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:38.572 07:42:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:38.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:38.572 07:42:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:38.572 07:42:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:38.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:38.572 07:42:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:38.572 07:42:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.572 07:42:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.572 07:42:42 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:23:38.572 Found net devices under 0000:af:00.0: cvl_0_0 00:23:38.572 07:42:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:38.572 07:42:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.572 07:42:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.572 07:42:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:38.572 Found net devices under 0000:af:00.1: cvl_0_1 00:23:38.572 07:42:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:38.572 07:42:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:38.572 07:42:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.572 07:42:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.572 07:42:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:38.572 07:42:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.572 07:42:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.572 07:42:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:38.572 07:42:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.572 07:42:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.572 07:42:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:38.572 07:42:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:38.572 07:42:42 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:23:38.572 07:42:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.572 07:42:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.572 07:42:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.572 07:42:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:38.572 07:42:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.572 07:42:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.572 07:42:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.572 07:42:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:38.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:23:38.572 00:23:38.572 --- 10.0.0.2 ping statistics --- 00:23:38.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.572 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:23:38.572 07:42:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:23:38.572 00:23:38.572 --- 10.0.0.1 ping statistics --- 00:23:38.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.572 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:38.572 07:42:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.572 07:42:42 -- nvmf/common.sh@410 -- # return 0 00:23:38.572 07:42:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:38.572 07:42:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.572 07:42:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:38.572 07:42:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:38.573 07:42:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.573 07:42:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:38.573 07:42:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:38.573 07:42:42 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:38.573 07:42:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:38.573 07:42:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:38.573 07:42:42 -- common/autotest_common.sh@10 -- # set +x 00:23:38.573 07:42:42 -- nvmf/common.sh@469 -- # nvmfpid=11829 00:23:38.573 07:42:42 -- nvmf/common.sh@470 -- # waitforlisten 11829 00:23:38.573 07:42:42 -- common/autotest_common.sh@819 -- # '[' -z 11829 ']' 00:23:38.573 07:42:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.573 07:42:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:38.573 07:42:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:38.573 07:42:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:38.573 07:42:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:38.573 07:42:42 -- common/autotest_common.sh@10 -- # set +x 00:23:38.831 [2024-10-07 07:42:42.544743] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:38.831 [2024-10-07 07:42:42.544788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.831 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.831 [2024-10-07 07:42:42.603485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.831 [2024-10-07 07:42:42.685635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:38.831 [2024-10-07 07:42:42.685741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.831 [2024-10-07 07:42:42.685749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.831 [2024-10-07 07:42:42.685756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.831 [2024-10-07 07:42:42.685803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.831 [2024-10-07 07:42:42.685898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.831 [2024-10-07 07:42:42.685919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.831 [2024-10-07 07:42:42.685920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.767 07:42:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:39.767 07:42:43 -- common/autotest_common.sh@852 -- # return 0 00:23:39.767 07:42:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:39.767 07:42:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 07:42:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.767 07:42:43 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:39.767 07:42:43 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 [2024-10-07 07:42:43.517365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 Malloc1 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.767 07:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:39.767 07:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:39.767 [2024-10-07 07:42:43.568940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.767 07:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:39.767 07:42:43 -- target/perf_adq.sh@73 -- # perfpid=11988 00:23:39.767 07:42:43 -- target/perf_adq.sh@74 -- # sleep 2 00:23:39.767 07:42:43 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:39.767 
EAL: No free 2048 kB hugepages reported on node 1 00:23:41.665 07:42:45 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:41.665 07:42:45 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:41.665 07:42:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:41.665 07:42:45 -- target/perf_adq.sh@76 -- # wc -l 00:23:41.665 07:42:45 -- common/autotest_common.sh@10 -- # set +x 00:23:41.665 07:42:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:41.665 07:42:45 -- target/perf_adq.sh@76 -- # count=4 00:23:41.665 07:42:45 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:23:41.665 07:42:45 -- target/perf_adq.sh@81 -- # wait 11988 00:23:51.636 Initializing NVMe Controllers 00:23:51.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:51.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:51.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:51.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:51.637 Initialization complete. Launching workers. 
00:23:51.637 ======================================================== 00:23:51.637 Latency(us) 00:23:51.637 Device Information : IOPS MiB/s Average min max 00:23:51.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11095.30 43.34 5786.80 912.63 45572.36 00:23:51.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11028.90 43.08 5808.88 872.88 43082.37 00:23:51.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11139.10 43.51 5745.13 884.61 10394.24 00:23:51.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11132.60 43.49 5748.81 1014.34 11252.86 00:23:51.637 ======================================================== 00:23:51.637 Total : 44395.90 173.42 5772.30 872.88 45572.36 00:23:51.637 00:23:51.637 07:42:53 -- target/perf_adq.sh@82 -- # nvmftestfini 00:23:51.637 07:42:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:51.637 07:42:53 -- nvmf/common.sh@116 -- # sync 00:23:51.637 07:42:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:51.637 07:42:53 -- nvmf/common.sh@119 -- # set +e 00:23:51.637 07:42:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:51.637 07:42:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:51.637 rmmod nvme_tcp 00:23:51.637 rmmod nvme_fabrics 00:23:51.637 rmmod nvme_keyring 00:23:51.637 07:42:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:51.637 07:42:53 -- nvmf/common.sh@123 -- # set -e 00:23:51.637 07:42:53 -- nvmf/common.sh@124 -- # return 0 00:23:51.637 07:42:53 -- nvmf/common.sh@477 -- # '[' -n 11829 ']' 00:23:51.637 07:42:53 -- nvmf/common.sh@478 -- # killprocess 11829 00:23:51.637 07:42:53 -- common/autotest_common.sh@926 -- # '[' -z 11829 ']' 00:23:51.637 07:42:53 -- common/autotest_common.sh@930 -- # kill -0 11829 00:23:51.637 07:42:53 -- common/autotest_common.sh@931 -- # uname 00:23:51.637 07:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:51.637 07:42:53 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 11829 00:23:51.637 07:42:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:51.637 07:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:51.637 07:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 11829' 00:23:51.637 killing process with pid 11829 00:23:51.637 07:42:53 -- common/autotest_common.sh@945 -- # kill 11829 00:23:51.637 07:42:53 -- common/autotest_common.sh@950 -- # wait 11829 00:23:51.637 07:42:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:51.637 07:42:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:51.637 07:42:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:51.637 07:42:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.637 07:42:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:51.637 07:42:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.637 07:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.637 07:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.576 07:42:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:52.576 07:42:56 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:52.576 07:42:56 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:53.513 07:42:57 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:55.419 07:42:59 -- target/perf_adq.sh@54 -- # sleep 5 00:24:00.699 07:43:04 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:00.699 07:43:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:00.699 07:43:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.699 07:43:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:00.699 07:43:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:00.699 07:43:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:00.699 07:43:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.699 07:43:04 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.699 07:43:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.699 07:43:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:00.699 07:43:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:00.699 07:43:04 -- common/autotest_common.sh@10 -- # set +x 00:24:00.699 07:43:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:00.699 07:43:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:00.699 07:43:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:00.699 07:43:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:00.699 07:43:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:00.699 07:43:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:00.699 07:43:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:00.699 07:43:04 -- nvmf/common.sh@294 -- # net_devs=() 00:24:00.699 07:43:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:00.699 07:43:04 -- nvmf/common.sh@295 -- # e810=() 00:24:00.699 07:43:04 -- nvmf/common.sh@295 -- # local -ga e810 00:24:00.699 07:43:04 -- nvmf/common.sh@296 -- # x722=() 00:24:00.699 07:43:04 -- nvmf/common.sh@296 -- # local -ga x722 00:24:00.699 07:43:04 -- nvmf/common.sh@297 -- # mlx=() 00:24:00.699 07:43:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:00.699 07:43:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.699 07:43:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.699 07:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.699 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.699 07:43:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:00.699 07:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.699 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.699 07:43:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:24:00.699 07:43:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.699 07:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.699 07:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.699 07:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.699 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.699 07:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:00.699 07:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.699 07:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.699 07:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.699 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.699 07:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:00.699 07:43:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:00.699 07:43:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.699 07:43:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.699 07:43:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:00.699 07:43:04 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.699 07:43:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.699 07:43:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:00.699 07:43:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.699 07:43:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.699 07:43:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:00.699 07:43:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:00.699 07:43:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.699 07:43:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.699 07:43:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.699 07:43:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.699 07:43:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:00.699 07:43:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.699 07:43:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.699 07:43:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.699 07:43:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:00.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:24:00.699 00:24:00.699 --- 10.0.0.2 ping statistics --- 00:24:00.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.699 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:24:00.699 07:43:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:24:00.699 00:24:00.699 --- 10.0.0.1 ping statistics --- 00:24:00.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.699 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:00.699 07:43:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.699 07:43:04 -- nvmf/common.sh@410 -- # return 0 00:24:00.699 07:43:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:00.699 07:43:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.699 07:43:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:00.699 07:43:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.699 07:43:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:00.699 07:43:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:00.699 07:43:04 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:00.699 07:43:04 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:00.699 07:43:04 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:00.699 07:43:04 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:00.699 net.core.busy_poll = 1 00:24:00.699 07:43:04 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:00.699 net.core.busy_read = 1 00:24:00.699 07:43:04 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:00.699 07:43:04 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:00.958 07:43:04 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:00.958 07:43:04 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:00.958 07:43:04 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:00.958 07:43:04 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:00.958 07:43:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:00.958 07:43:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:00.958 07:43:04 -- common/autotest_common.sh@10 -- # set +x 00:24:00.958 07:43:04 -- nvmf/common.sh@469 -- # nvmfpid=15848 00:24:00.958 07:43:04 -- nvmf/common.sh@470 -- # waitforlisten 15848 00:24:00.958 07:43:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:00.958 07:43:04 -- common/autotest_common.sh@819 -- # '[' -z 15848 ']' 00:24:00.958 07:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.958 07:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:00.958 07:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.958 07:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:00.958 07:43:04 -- common/autotest_common.sh@10 -- # set +x 00:24:00.958 [2024-10-07 07:43:04.907940] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:00.959 [2024-10-07 07:43:04.907984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.217 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.217 [2024-10-07 07:43:04.964534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.217 [2024-10-07 07:43:05.039889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:01.217 [2024-10-07 07:43:05.039999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.217 [2024-10-07 07:43:05.040008] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.217 [2024-10-07 07:43:05.040014] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.217 [2024-10-07 07:43:05.040056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.217 [2024-10-07 07:43:05.040082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.217 [2024-10-07 07:43:05.040142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.217 [2024-10-07 07:43:05.040143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.786 07:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:01.786 07:43:05 -- common/autotest_common.sh@852 -- # return 0 00:24:01.786 07:43:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:01.786 07:43:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:01.786 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 07:43:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.045 07:43:05 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:02.045 07:43:05 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 [2024-10-07 07:43:05.859675] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 Malloc1 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.045 07:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.045 07:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:02.045 [2024-10-07 07:43:05.906952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.045 07:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.045 07:43:05 -- target/perf_adq.sh@94 -- # perfpid=16088 00:24:02.045 07:43:05 -- target/perf_adq.sh@95 -- # sleep 2 00:24:02.045 07:43:05 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:02.045 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.571 07:43:07 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:04.571 07:43:07 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:04.571 07:43:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:04.571 07:43:07 -- target/perf_adq.sh@97 -- # wc -l 00:24:04.571 07:43:07 -- common/autotest_common.sh@10 -- # set +x 00:24:04.571 07:43:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:04.571 07:43:07 -- target/perf_adq.sh@97 -- # count=2 00:24:04.571 07:43:07 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:04.571 07:43:07 -- target/perf_adq.sh@103 -- # wait 16088 00:24:12.693 Initializing NVMe Controllers 00:24:12.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:12.693 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:12.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:12.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:12.693 Initialization complete. Launching workers. 00:24:12.693 ======================================================== 00:24:12.693 Latency(us) 00:24:12.693 Device Information : IOPS MiB/s Average min max 00:24:12.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8111.00 31.68 7914.24 1271.52 51433.74 00:24:12.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8936.40 34.91 7162.41 1350.31 51187.86 00:24:12.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8732.90 34.11 7329.14 1336.43 52333.05 00:24:12.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8509.10 33.24 7521.76 1068.65 52456.69 00:24:12.693 ======================================================== 00:24:12.693 Total : 34289.40 133.94 7471.89 1068.65 52456.69 00:24:12.693 00:24:12.693 07:43:16 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:12.693 07:43:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:12.693 07:43:16 -- nvmf/common.sh@116 -- # sync 00:24:12.693 07:43:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:12.693 07:43:16 -- nvmf/common.sh@119 -- # set +e 00:24:12.693 07:43:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:12.693 07:43:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:12.693 rmmod nvme_tcp 00:24:12.693 rmmod nvme_fabrics 00:24:12.693 rmmod nvme_keyring 00:24:12.693 07:43:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:12.693 07:43:16 -- nvmf/common.sh@123 -- # set -e 00:24:12.693 07:43:16 -- nvmf/common.sh@124 -- # return 0 00:24:12.693 07:43:16 -- nvmf/common.sh@477 -- # '[' -n 15848 ']' 00:24:12.693 07:43:16 -- nvmf/common.sh@478 -- # killprocess 15848 00:24:12.693 07:43:16 -- 
common/autotest_common.sh@926 -- # '[' -z 15848 ']' 00:24:12.693 07:43:16 -- common/autotest_common.sh@930 -- # kill -0 15848 00:24:12.693 07:43:16 -- common/autotest_common.sh@931 -- # uname 00:24:12.693 07:43:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:12.693 07:43:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 15848 00:24:12.693 07:43:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:12.693 07:43:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:12.693 07:43:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 15848' 00:24:12.693 killing process with pid 15848 00:24:12.693 07:43:16 -- common/autotest_common.sh@945 -- # kill 15848 00:24:12.693 07:43:16 -- common/autotest_common.sh@950 -- # wait 15848 00:24:12.693 07:43:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:12.693 07:43:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:12.693 07:43:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:12.693 07:43:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.693 07:43:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:12.693 07:43:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.693 07:43:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.693 07:43:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.609 07:43:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:14.609 07:43:18 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:14.609 00:24:14.609 real 0m49.682s 00:24:14.609 user 2m48.610s 00:24:14.609 sys 0m10.056s 00:24:14.609 07:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.609 07:43:18 -- common/autotest_common.sh@10 -- # set +x 00:24:14.609 ************************************ 00:24:14.609 END TEST nvmf_perf_adq 00:24:14.609 ************************************ 00:24:14.609 07:43:18 -- 
nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.609 07:43:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:14.609 07:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:14.609 07:43:18 -- common/autotest_common.sh@10 -- # set +x 00:24:14.609 ************************************ 00:24:14.609 START TEST nvmf_shutdown 00:24:14.609 ************************************ 00:24:14.609 07:43:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:14.869 * Looking for test storage... 00:24:14.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:14.869 07:43:18 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.869 07:43:18 -- nvmf/common.sh@7 -- # uname -s 00:24:14.869 07:43:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.869 07:43:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.869 07:43:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.869 07:43:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.869 07:43:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.869 07:43:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.869 07:43:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.869 07:43:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.869 07:43:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.869 07:43:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.869 07:43:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.869 07:43:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.869 07:43:18 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.869 07:43:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.869 07:43:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.869 07:43:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.869 07:43:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.869 07:43:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.869 07:43:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.869 07:43:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.869 07:43:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.869 07:43:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.869 07:43:18 -- paths/export.sh@5 -- # export PATH 00:24:14.869 07:43:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.869 07:43:18 -- nvmf/common.sh@46 -- # : 0 00:24:14.869 07:43:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:14.869 07:43:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:14.869 07:43:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:14.869 07:43:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.869 07:43:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.869 07:43:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:14.869 07:43:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:14.869 07:43:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:14.869 07:43:18 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.869 07:43:18 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.869 07:43:18 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:14.869 07:43:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:14.869 07:43:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:14.869 07:43:18 -- common/autotest_common.sh@10 -- # set +x 00:24:14.869 ************************************ 00:24:14.869 START TEST nvmf_shutdown_tc1 00:24:14.869 ************************************ 00:24:14.869 07:43:18 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:14.869 07:43:18 -- target/shutdown.sh@74 -- # starttarget 00:24:14.869 07:43:18 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:14.869 07:43:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:14.869 07:43:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.869 07:43:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:14.869 07:43:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:14.869 07:43:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:14.869 07:43:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.869 07:43:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.869 07:43:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.869 07:43:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:14.869 07:43:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:14.869 07:43:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:14.869 07:43:18 -- common/autotest_common.sh@10 -- # set +x 00:24:20.145 07:43:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:20.145 07:43:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:20.145 07:43:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:20.145 07:43:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:20.145 07:43:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:20.145 07:43:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:20.145 07:43:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:24:20.145 07:43:23 -- nvmf/common.sh@294 -- # net_devs=() 00:24:20.145 07:43:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:20.145 07:43:23 -- nvmf/common.sh@295 -- # e810=() 00:24:20.145 07:43:23 -- nvmf/common.sh@295 -- # local -ga e810 00:24:20.145 07:43:23 -- nvmf/common.sh@296 -- # x722=() 00:24:20.145 07:43:23 -- nvmf/common.sh@296 -- # local -ga x722 00:24:20.145 07:43:23 -- nvmf/common.sh@297 -- # mlx=() 00:24:20.145 07:43:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:20.145 07:43:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.145 07:43:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:20.145 07:43:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:20.145 07:43:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:20.145 07:43:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:20.145 07:43:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:20.145 07:43:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:20.145 07:43:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
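The trace above builds bash arrays of PCI vendor:device IDs (e810, x722, mlx) and then buckets each discovered NIC before probing its net devices. A dry-run sketch of that bucketing, restricted to the IDs that actually appear in this log; `classify_nic` is a hypothetical helper, not a function in nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the e810/x722/mlx arrays built in the
# trace above: bucket a NIC by its PCI vendor/device ID. Only IDs that
# appear in this log are listed; the real script matches more devices.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family (ice)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx  ;;    # Mellanox ConnectX
    *)                           echo unknown ;;
  esac
}

# The two ports found below, 0000:af:00.0 and 0000:af:00.1,
# report "0x8086 - 0x159b":
classify_nic 0x8086 0x159b   # prints: e810
```

The 0x1017/0x1019 comparisons in the trace appear to be the Mellanox-specific RDMA checks; on this TCP run they fall through and the two E810 ports are kept as `pci_devs`.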
00:24:20.145 07:43:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.145 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.145 07:43:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:20.145 07:43:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:20.146 07:43:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.146 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.146 07:43:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:20.146 07:43:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.146 07:43:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.146 07:43:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.146 07:43:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.146 07:43:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.146 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.146 07:43:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.146 07:43:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:20.146 07:43:23 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.146 07:43:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:20.146 07:43:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.146 07:43:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.146 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.146 07:43:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.146 07:43:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:20.146 07:43:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:20.146 07:43:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:20.146 07:43:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:20.146 07:43:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.146 07:43:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.146 07:43:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.146 07:43:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:20.146 07:43:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.146 07:43:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.146 07:43:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:20.146 07:43:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.146 07:43:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.146 07:43:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:20.146 07:43:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:20.146 07:43:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.146 07:43:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.146 07:43:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.146 07:43:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0
00:24:20.146 07:43:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:24:20.146 07:43:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:20.146 07:43:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:20.146 07:43:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:20.146 07:43:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:24:20.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:20.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms
00:24:20.146
00:24:20.146 --- 10.0.0.2 ping statistics ---
00:24:20.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.146 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms
00:24:20.146 07:43:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:20.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:20.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:24:20.146
00:24:20.146 --- 10.0.0.1 ping statistics ---
00:24:20.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.146 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:24:20.146 07:43:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:20.146 07:43:24 -- nvmf/common.sh@410 -- # return 0
00:24:20.146 07:43:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:20.146 07:43:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:20.146 07:43:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:20.146 07:43:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:20.146 07:43:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:20.146 07:43:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:20.146 07:43:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:20.146 07:43:24 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:24:20.146 07:43:24 -- nvmf/common.sh@467 --
timing_enter start_nvmf_tgt 00:24:20.146 07:43:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:20.146 07:43:24 -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 07:43:24 -- nvmf/common.sh@469 -- # nvmfpid=21237 00:24:20.146 07:43:24 -- nvmf/common.sh@470 -- # waitforlisten 21237 00:24:20.146 07:43:24 -- common/autotest_common.sh@819 -- # '[' -z 21237 ']' 00:24:20.146 07:43:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.146 07:43:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:20.146 07:43:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:20.146 07:43:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.146 07:43:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:20.146 07:43:24 -- common/autotest_common.sh@10 -- # set +x 00:24:20.406 [2024-10-07 07:43:24.142526] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:20.406 [2024-10-07 07:43:24.142569] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.406 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.406 [2024-10-07 07:43:24.200651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.406 [2024-10-07 07:43:24.277215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:20.406 [2024-10-07 07:43:24.277321] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:20.406 [2024-10-07 07:43:24.277329] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.406 [2024-10-07 07:43:24.277336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.407 [2024-10-07 07:43:24.277380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.407 [2024-10-07 07:43:24.277405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.407 [2024-10-07 07:43:24.277514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.407 [2024-10-07 07:43:24.277515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:21.343 07:43:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:21.343 07:43:24 -- common/autotest_common.sh@852 -- # return 0 00:24:21.343 07:43:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:21.343 07:43:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.343 07:43:24 -- common/autotest_common.sh@10 -- # set +x 00:24:21.343 07:43:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.343 07:43:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.343 07:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.343 07:43:24 -- common/autotest_common.sh@10 -- # set +x 00:24:21.343 [2024-10-07 07:43:25.004421] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.343 07:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.343 07:43:25 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:21.343 07:43:25 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:21.343 07:43:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:21.343 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.343 07:43:25 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.343 07:43:25 -- target/shutdown.sh@28 -- # cat 00:24:21.343 07:43:25 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:21.343 07:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.343 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.343 Malloc1 00:24:21.343 [2024-10-07 07:43:25.099771] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.343 Malloc2 00:24:21.343 Malloc3 00:24:21.343 Malloc4 00:24:21.343 Malloc5 00:24:21.343 Malloc6 00:24:21.602 Malloc7 00:24:21.602 Malloc8 00:24:21.602 
Malloc9 00:24:21.602 Malloc10 00:24:21.602 07:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.602 07:43:25 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:21.602 07:43:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:21.602 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.602 07:43:25 -- target/shutdown.sh@78 -- # perfpid=21521 00:24:21.602 07:43:25 -- target/shutdown.sh@79 -- # waitforlisten 21521 /var/tmp/bdevperf.sock 00:24:21.603 07:43:25 -- common/autotest_common.sh@819 -- # '[' -z 21521 ']' 00:24:21.603 07:43:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.603 07:43:25 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:21.603 07:43:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:21.603 07:43:25 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:21.603 07:43:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
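The gen_nvmf_target_json call below assembles one bdev_nvme_attach_controller entry per subsystem (the heredoc/`jq` expansion that follows in the trace) and feeds it to bdevperf as its --json config. A minimal stand-in that produces the same shape as the fully expanded config printed later in this log; `gen_json` is a hypothetical simplification, not the real helper:

```shell
#!/usr/bin/env bash
# Hypothetical simplification of gen_nvmf_target_json: emit one
# bdev_nvme_attach_controller config entry per subsystem number,
# comma-joined, matching the expanded JSON shown later in the log.
gen_json() {
  local entries=() i
  for i in "$@"; do
    entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$i" "$i" "$i")")
  done
  local IFS=,          # join the array elements with commas
  echo "${entries[*]}"
}

gen_json {1..10}
```

bdevperf receives this through process substitution (`--json <(...)`), which is why the command line recorded when the process is later killed shows a file-descriptor path rather than a file name.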
00:24:21.603 07:43:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:21.603 07:43:25 -- nvmf/common.sh@520 -- # config=() 00:24:21.603 07:43:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.603 07:43:25 -- nvmf/common.sh@520 -- # local subsystem config 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": "Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": "Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": 
"Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": "Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": "Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 
00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.603 { 00:24:21.603 "params": { 00:24:21.603 "name": "Nvme$subsystem", 00:24:21.603 "trtype": "$TEST_TRANSPORT", 00:24:21.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.603 "adrfam": "ipv4", 00:24:21.603 "trsvcid": "$NVMF_PORT", 00:24:21.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.603 "hdgst": ${hdgst:-false}, 00:24:21.603 "ddgst": ${ddgst:-false} 00:24:21.603 }, 00:24:21.603 "method": "bdev_nvme_attach_controller" 00:24:21.603 } 00:24:21.603 EOF 00:24:21.603 )") 00:24:21.603 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.603 [2024-10-07 07:43:25.572592] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:21.603 [2024-10-07 07:43:25.572638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:21.603 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.863 { 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme$subsystem", 00:24:21.863 "trtype": "$TEST_TRANSPORT", 00:24:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "$NVMF_PORT", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.863 "hdgst": ${hdgst:-false}, 00:24:21.863 "ddgst": ${ddgst:-false} 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 } 00:24:21.863 EOF 00:24:21.863 )") 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.863 07:43:25 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.863 { 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme$subsystem", 00:24:21.863 "trtype": "$TEST_TRANSPORT", 00:24:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "$NVMF_PORT", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.863 "hdgst": ${hdgst:-false}, 00:24:21.863 "ddgst": ${ddgst:-false} 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 } 00:24:21.863 EOF 00:24:21.863 )") 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.863 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.863 { 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme$subsystem", 00:24:21.863 "trtype": "$TEST_TRANSPORT", 00:24:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "$NVMF_PORT", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.863 "hdgst": ${hdgst:-false}, 00:24:21.863 "ddgst": ${ddgst:-false} 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 } 00:24:21.863 EOF 00:24:21.863 )") 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.863 07:43:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:21.863 { 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme$subsystem", 00:24:21.863 "trtype": "$TEST_TRANSPORT", 00:24:21.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "$NVMF_PORT", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.863 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:21.863 "hdgst": ${hdgst:-false}, 00:24:21.863 "ddgst": ${ddgst:-false} 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 } 00:24:21.863 EOF 00:24:21.863 )") 00:24:21.863 07:43:25 -- nvmf/common.sh@542 -- # cat 00:24:21.863 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.863 07:43:25 -- nvmf/common.sh@544 -- # jq . 00:24:21.863 07:43:25 -- nvmf/common.sh@545 -- # IFS=, 00:24:21.863 07:43:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme1", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme2", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme3", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme4", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": 
"4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme5", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme6", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme7", 00:24:21.863 "trtype": "tcp", 00:24:21.863 "traddr": "10.0.0.2", 00:24:21.863 "adrfam": "ipv4", 00:24:21.863 "trsvcid": "4420", 00:24:21.863 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:21.863 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:21.863 "hdgst": false, 00:24:21.863 "ddgst": false 00:24:21.863 }, 00:24:21.863 "method": "bdev_nvme_attach_controller" 00:24:21.863 },{ 00:24:21.863 "params": { 00:24:21.863 "name": "Nvme8", 00:24:21.864 "trtype": "tcp", 00:24:21.864 "traddr": "10.0.0.2", 00:24:21.864 "adrfam": "ipv4", 00:24:21.864 "trsvcid": "4420", 00:24:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:21.864 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:21.864 "hdgst": false, 00:24:21.864 "ddgst": false 00:24:21.864 }, 00:24:21.864 "method": "bdev_nvme_attach_controller" 00:24:21.864 },{ 00:24:21.864 
"params": { 00:24:21.864 "name": "Nvme9", 00:24:21.864 "trtype": "tcp", 00:24:21.864 "traddr": "10.0.0.2", 00:24:21.864 "adrfam": "ipv4", 00:24:21.864 "trsvcid": "4420", 00:24:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:21.864 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:21.864 "hdgst": false, 00:24:21.864 "ddgst": false 00:24:21.864 }, 00:24:21.864 "method": "bdev_nvme_attach_controller" 00:24:21.864 },{ 00:24:21.864 "params": { 00:24:21.864 "name": "Nvme10", 00:24:21.864 "trtype": "tcp", 00:24:21.864 "traddr": "10.0.0.2", 00:24:21.864 "adrfam": "ipv4", 00:24:21.864 "trsvcid": "4420", 00:24:21.864 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:21.864 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:21.864 "hdgst": false, 00:24:21.864 "ddgst": false 00:24:21.864 }, 00:24:21.864 "method": "bdev_nvme_attach_controller" 00:24:21.864 }' 00:24:21.864 [2024-10-07 07:43:25.629594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.864 [2024-10-07 07:43:25.698031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.242 07:43:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:23.242 07:43:27 -- common/autotest_common.sh@852 -- # return 0 00:24:23.242 07:43:27 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.242 07:43:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:23.242 07:43:27 -- common/autotest_common.sh@10 -- # set +x 00:24:23.242 07:43:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:23.242 07:43:27 -- target/shutdown.sh@83 -- # kill -9 21521 00:24:23.242 07:43:27 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:23.242 07:43:27 -- target/shutdown.sh@87 -- # sleep 1 00:24:24.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 21521 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:24.181 07:43:28 -- 
target/shutdown.sh@88 -- # kill -0 21237 00:24:24.181 07:43:28 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:24.181 07:43:28 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:24.181 07:43:28 -- nvmf/common.sh@520 -- # config=() 00:24:24.181 07:43:28 -- nvmf/common.sh@520 -- # local subsystem config 00:24:24.181 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.181 { 00:24:24.181 "params": { 00:24:24.181 "name": "Nvme$subsystem", 00:24:24.181 "trtype": "$TEST_TRANSPORT", 00:24:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.181 "adrfam": "ipv4", 00:24:24.181 "trsvcid": "$NVMF_PORT", 00:24:24.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.181 "hdgst": ${hdgst:-false}, 00:24:24.181 "ddgst": ${ddgst:-false} 00:24:24.181 }, 00:24:24.181 "method": "bdev_nvme_attach_controller" 00:24:24.181 } 00:24:24.181 EOF 00:24:24.181 )") 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.181 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.181 { 00:24:24.181 "params": { 00:24:24.181 "name": "Nvme$subsystem", 00:24:24.181 "trtype": "$TEST_TRANSPORT", 00:24:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.181 "adrfam": "ipv4", 00:24:24.181 "trsvcid": "$NVMF_PORT", 00:24:24.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.181 "hdgst": ${hdgst:-false}, 00:24:24.181 "ddgst": ${ddgst:-false} 00:24:24.181 }, 00:24:24.181 "method": "bdev_nvme_attach_controller" 00:24:24.181 } 00:24:24.181 EOF 00:24:24.181 )") 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.181 
07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.181 { 00:24:24.181 "params": { 00:24:24.181 "name": "Nvme$subsystem", 00:24:24.181 "trtype": "$TEST_TRANSPORT", 00:24:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.181 "adrfam": "ipv4", 00:24:24.181 "trsvcid": "$NVMF_PORT", 00:24:24.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.181 "hdgst": ${hdgst:-false}, 00:24:24.181 "ddgst": ${ddgst:-false} 00:24:24.181 }, 00:24:24.181 "method": "bdev_nvme_attach_controller" 00:24:24.181 } 00:24:24.181 EOF 00:24:24.181 )") 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.181 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.181 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.181 { 00:24:24.181 "params": { 00:24:24.181 "name": "Nvme$subsystem", 00:24:24.181 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": "Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": "Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 [2024-10-07 07:43:28.127880] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:24.182 [2024-10-07 07:43:28.127927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21998 ] 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": "Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": "Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": 
"Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.182 07:43:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:24.182 07:43:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:24.182 { 00:24:24.182 "params": { 00:24:24.182 "name": "Nvme$subsystem", 00:24:24.182 "trtype": "$TEST_TRANSPORT", 00:24:24.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.182 "adrfam": "ipv4", 00:24:24.182 "trsvcid": "$NVMF_PORT", 00:24:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.182 "hdgst": ${hdgst:-false}, 00:24:24.182 "ddgst": ${ddgst:-false} 00:24:24.182 }, 00:24:24.182 "method": "bdev_nvme_attach_controller" 00:24:24.182 } 00:24:24.182 EOF 00:24:24.182 )") 00:24:24.442 07:43:28 -- nvmf/common.sh@542 -- # cat 00:24:24.442 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.442 07:43:28 -- nvmf/common.sh@544 -- # jq . 
00:24:24.442 07:43:28 -- nvmf/common.sh@545 -- # IFS=, 00:24:24.442 07:43:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:24.442 "params": { 00:24:24.442 "name": "Nvme1", 00:24:24.442 "trtype": "tcp", 00:24:24.442 "traddr": "10.0.0.2", 00:24:24.442 "adrfam": "ipv4", 00:24:24.442 "trsvcid": "4420", 00:24:24.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.442 "hdgst": false, 00:24:24.442 "ddgst": false 00:24:24.442 }, 00:24:24.442 "method": "bdev_nvme_attach_controller" 00:24:24.442 },{ 00:24:24.442 "params": { 00:24:24.442 "name": "Nvme2", 00:24:24.442 "trtype": "tcp", 00:24:24.442 "traddr": "10.0.0.2", 00:24:24.442 "adrfam": "ipv4", 00:24:24.442 "trsvcid": "4420", 00:24:24.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:24.442 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.442 "hdgst": false, 00:24:24.442 "ddgst": false 00:24:24.442 }, 00:24:24.442 "method": "bdev_nvme_attach_controller" 00:24:24.442 },{ 00:24:24.442 "params": { 00:24:24.442 "name": "Nvme3", 00:24:24.442 "trtype": "tcp", 00:24:24.442 "traddr": "10.0.0.2", 00:24:24.442 "adrfam": "ipv4", 00:24:24.442 "trsvcid": "4420", 00:24:24.442 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:24.442 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:24.442 "hdgst": false, 00:24:24.442 "ddgst": false 00:24:24.442 }, 00:24:24.442 "method": "bdev_nvme_attach_controller" 00:24:24.442 },{ 00:24:24.442 "params": { 00:24:24.442 "name": "Nvme4", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme5", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 
00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme6", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme7", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme8", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme9", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 
00:24:24.443 },{ 00:24:24.443 "params": { 00:24:24.443 "name": "Nvme10", 00:24:24.443 "trtype": "tcp", 00:24:24.443 "traddr": "10.0.0.2", 00:24:24.443 "adrfam": "ipv4", 00:24:24.443 "trsvcid": "4420", 00:24:24.443 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:24.443 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:24.443 "hdgst": false, 00:24:24.443 "ddgst": false 00:24:24.443 }, 00:24:24.443 "method": "bdev_nvme_attach_controller" 00:24:24.443 }' 00:24:24.443 [2024-10-07 07:43:28.185679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.443 [2024-10-07 07:43:28.255724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.824 Running I/O for 1 seconds... 00:24:26.764 00:24:26.764 Latency(us) 00:24:26.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme1n1 : 1.05 459.86 28.74 0.00 0.00 136143.14 29959.31 109850.82 00:24:26.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme2n1 : 1.07 487.30 30.46 0.00 0.00 128670.21 16103.13 115842.68 00:24:26.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme3n1 : 1.10 472.55 29.53 0.00 0.00 127184.68 17351.44 103359.63 00:24:26.764 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme4n1 : 1.06 493.54 30.85 0.00 0.00 125370.39 14917.24 106355.57 00:24:26.764 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme5n1 : 1.07 488.22 30.51 0.00 0.00 125987.07 16852.11 100363.70 00:24:26.764 Job: Nvme6n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme6n1 : 1.06 492.18 30.76 0.00 0.00 123998.87 18350.08 97867.09 00:24:26.764 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme7n1 : 1.07 486.48 30.41 0.00 0.00 125178.96 13793.77 99864.38 00:24:26.764 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme8n1 : 1.08 489.05 30.57 0.00 0.00 123922.37 10423.34 101861.67 00:24:26.764 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme9n1 : 1.08 485.36 30.33 0.00 0.00 124291.52 3698.10 111848.11 00:24:26.764 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:26.764 Verification LBA range: start 0x0 length 0x400 00:24:26.764 Nvme10n1 : 1.12 467.03 29.19 0.00 0.00 123942.14 10423.34 105856.24 00:24:26.764 =================================================================================================================== 00:24:26.764 Total : 4821.58 301.35 0.00 0.00 126393.80 3698.10 115842.68 00:24:27.024 07:43:30 -- target/shutdown.sh@93 -- # stoptarget 00:24:27.024 07:43:30 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:27.024 07:43:30 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:27.024 07:43:30 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:27.024 07:43:30 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:27.024 07:43:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.024 07:43:30 -- nvmf/common.sh@116 -- # sync 00:24:27.024 07:43:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:27.024 07:43:30 -- 
nvmf/common.sh@119 -- # set +e 00:24:27.024 07:43:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.024 07:43:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:27.024 rmmod nvme_tcp 00:24:27.024 rmmod nvme_fabrics 00:24:27.024 rmmod nvme_keyring 00:24:27.024 07:43:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.024 07:43:30 -- nvmf/common.sh@123 -- # set -e 00:24:27.024 07:43:30 -- nvmf/common.sh@124 -- # return 0 00:24:27.024 07:43:30 -- nvmf/common.sh@477 -- # '[' -n 21237 ']' 00:24:27.024 07:43:30 -- nvmf/common.sh@478 -- # killprocess 21237 00:24:27.024 07:43:30 -- common/autotest_common.sh@926 -- # '[' -z 21237 ']' 00:24:27.024 07:43:30 -- common/autotest_common.sh@930 -- # kill -0 21237 00:24:27.024 07:43:30 -- common/autotest_common.sh@931 -- # uname 00:24:27.024 07:43:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.024 07:43:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 21237 00:24:27.024 07:43:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:27.024 07:43:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:27.024 07:43:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 21237' 00:24:27.024 killing process with pid 21237 00:24:27.024 07:43:30 -- common/autotest_common.sh@945 -- # kill 21237 00:24:27.024 07:43:30 -- common/autotest_common.sh@950 -- # wait 21237 00:24:27.593 07:43:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:27.593 07:43:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:27.593 07:43:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:27.593 07:43:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.593 07:43:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:27.593 07:43:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.593 07:43:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.593 07:43:31 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.500 07:43:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:29.500 00:24:29.500 real 0m14.778s 00:24:29.500 user 0m33.671s 00:24:29.500 sys 0m5.425s 00:24:29.500 07:43:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.500 07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.500 ************************************ 00:24:29.500 END TEST nvmf_shutdown_tc1 00:24:29.500 ************************************ 00:24:29.500 07:43:33 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:29.500 07:43:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:29.500 07:43:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:29.500 07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.760 ************************************ 00:24:29.760 START TEST nvmf_shutdown_tc2 00:24:29.760 ************************************ 00:24:29.760 07:43:33 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:24:29.760 07:43:33 -- target/shutdown.sh@98 -- # starttarget 00:24:29.760 07:43:33 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:29.760 07:43:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:29.760 07:43:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.760 07:43:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:29.760 07:43:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:29.760 07:43:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:29.760 07:43:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.760 07:43:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.760 07:43:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.760 07:43:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:29.760 07:43:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:29.760 
07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.760 07:43:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:29.760 07:43:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:29.760 07:43:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:29.760 07:43:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:29.760 07:43:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:29.760 07:43:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:29.760 07:43:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:29.760 07:43:33 -- nvmf/common.sh@294 -- # net_devs=() 00:24:29.760 07:43:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:29.760 07:43:33 -- nvmf/common.sh@295 -- # e810=() 00:24:29.760 07:43:33 -- nvmf/common.sh@295 -- # local -ga e810 00:24:29.760 07:43:33 -- nvmf/common.sh@296 -- # x722=() 00:24:29.760 07:43:33 -- nvmf/common.sh@296 -- # local -ga x722 00:24:29.760 07:43:33 -- nvmf/common.sh@297 -- # mlx=() 00:24:29.760 07:43:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:29.760 07:43:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@317 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.760 07:43:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:29.760 07:43:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:29.760 07:43:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:29.760 07:43:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:29.760 07:43:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:29.760 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:29.760 07:43:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:29.760 07:43:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:29.760 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:29.760 07:43:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:29.760 07:43:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:29.761 07:43:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:29.761 07:43:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:29.761 07:43:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:29.761 07:43:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.761 07:43:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:29.761 Found net devices under 0000:af:00.0: cvl_0_0 00:24:29.761 07:43:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.761 07:43:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:29.761 07:43:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.761 07:43:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:29.761 07:43:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.761 07:43:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:29.761 Found net devices under 0000:af:00.1: cvl_0_1 00:24:29.761 07:43:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.761 07:43:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:29.761 07:43:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:29.761 07:43:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:29.761 07:43:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.761 07:43:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.761 07:43:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.761 07:43:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:29.761 07:43:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.761 07:43:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.761 07:43:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:29.761 07:43:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.761 07:43:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:29.761 07:43:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:29.761 07:43:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:29.761 07:43:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.761 07:43:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.761 07:43:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.761 07:43:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.761 07:43:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:29.761 07:43:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.761 07:43:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.761 07:43:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.761 07:43:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:29.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:24:29.761 00:24:29.761 --- 10.0.0.2 ping statistics --- 00:24:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.761 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:24:29.761 07:43:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:24:29.761 00:24:29.761 --- 10.0.0.1 ping statistics --- 00:24:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.761 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:29.761 07:43:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.761 07:43:33 -- nvmf/common.sh@410 -- # return 0 00:24:29.761 07:43:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:29.761 07:43:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.761 07:43:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:29.761 07:43:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.761 07:43:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:29.761 07:43:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:29.761 07:43:33 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:29.761 07:43:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:29.761 07:43:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:29.761 07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:30.021 07:43:33 -- nvmf/common.sh@469 -- # nvmfpid=23013 00:24:30.021 07:43:33 -- nvmf/common.sh@470 -- # waitforlisten 23013 00:24:30.021 07:43:33 -- common/autotest_common.sh@819 -- # '[' -z 23013 ']' 00:24:30.021 07:43:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.021 07:43:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:30.021 07:43:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:30.021 07:43:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:30.021 07:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:30.021 07:43:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:30.021 [2024-10-07 07:43:33.780966] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:30.021 [2024-10-07 07:43:33.781010] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.021 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.021 [2024-10-07 07:43:33.843255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.021 [2024-10-07 07:43:33.920787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:30.021 [2024-10-07 07:43:33.920895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.021 [2024-10-07 07:43:33.920903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.021 [2024-10-07 07:43:33.920910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:30.021 [2024-10-07 07:43:33.921015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.021 [2024-10-07 07:43:33.921106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.021 [2024-10-07 07:43:33.921212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.021 [2024-10-07 07:43:33.921213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:30.959 07:43:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:30.959 07:43:34 -- common/autotest_common.sh@852 -- # return 0 00:24:30.959 07:43:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:30.959 07:43:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:30.959 07:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:30.959 07:43:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.959 07:43:34 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.959 07:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.959 07:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:30.959 [2024-10-07 07:43:34.628306] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.959 07:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.959 07:43:34 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:30.959 07:43:34 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:30.959 07:43:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:30.959 07:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:30.959 07:43:34 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:30.959 07:43:34 -- target/shutdown.sh@28 -- # cat 00:24:30.959 07:43:34 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:30.959 07:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.959 07:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:30.959 Malloc1 00:24:30.959 [2024-10-07 07:43:34.723674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.959 Malloc2 00:24:30.960 Malloc3 00:24:30.960 Malloc4 00:24:30.960 Malloc5 00:24:30.960 Malloc6 00:24:31.219 Malloc7 00:24:31.219 Malloc8 00:24:31.219 Malloc9 00:24:31.219 Malloc10 00:24:31.219 07:43:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.219 07:43:35 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:31.219 07:43:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:31.219 07:43:35 -- 
common/autotest_common.sh@10 -- # set +x 00:24:31.219 07:43:35 -- target/shutdown.sh@102 -- # perfpid=23291 00:24:31.219 07:43:35 -- target/shutdown.sh@103 -- # waitforlisten 23291 /var/tmp/bdevperf.sock 00:24:31.219 07:43:35 -- common/autotest_common.sh@819 -- # '[' -z 23291 ']' 00:24:31.219 07:43:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.219 07:43:35 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:31.219 07:43:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.219 07:43:35 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:31.219 07:43:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:31.219 07:43:35 -- nvmf/common.sh@520 -- # config=() 00:24:31.219 07:43:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.219 07:43:35 -- nvmf/common.sh@520 -- # local subsystem config 00:24:31.220 07:43:35 -- common/autotest_common.sh@10 -- # set +x 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": "Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": "Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": 
"Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": "Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": "Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 
00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.220 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.220 { 00:24:31.220 "params": { 00:24:31.220 "name": "Nvme$subsystem", 00:24:31.220 "trtype": "$TEST_TRANSPORT", 00:24:31.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.220 "adrfam": "ipv4", 00:24:31.220 "trsvcid": "$NVMF_PORT", 00:24:31.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.220 "hdgst": ${hdgst:-false}, 00:24:31.220 "ddgst": ${ddgst:-false} 00:24:31.220 }, 00:24:31.220 "method": "bdev_nvme_attach_controller" 00:24:31.220 } 00:24:31.220 EOF 00:24:31.220 )") 00:24:31.220 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.481 [2024-10-07 07:43:35.190607] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:31.481 [2024-10-07 07:43:35.190658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23291 ] 00:24:31.481 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.481 { 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme$subsystem", 00:24:31.481 "trtype": "$TEST_TRANSPORT", 00:24:31.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "$NVMF_PORT", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.481 "hdgst": ${hdgst:-false}, 00:24:31.481 "ddgst": ${ddgst:-false} 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 } 00:24:31.481 EOF 00:24:31.481 )") 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.481 
07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.481 { 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme$subsystem", 00:24:31.481 "trtype": "$TEST_TRANSPORT", 00:24:31.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "$NVMF_PORT", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.481 "hdgst": ${hdgst:-false}, 00:24:31.481 "ddgst": ${ddgst:-false} 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 } 00:24:31.481 EOF 00:24:31.481 )") 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.481 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.481 { 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme$subsystem", 00:24:31.481 "trtype": "$TEST_TRANSPORT", 00:24:31.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "$NVMF_PORT", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.481 "hdgst": ${hdgst:-false}, 00:24:31.481 "ddgst": ${ddgst:-false} 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 } 00:24:31.481 EOF 00:24:31.481 )") 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.481 07:43:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:31.481 { 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme$subsystem", 00:24:31.481 "trtype": "$TEST_TRANSPORT", 00:24:31.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "$NVMF_PORT", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.481 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:31.481 "hdgst": ${hdgst:-false}, 00:24:31.481 "ddgst": ${ddgst:-false} 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 } 00:24:31.481 EOF 00:24:31.481 )") 00:24:31.481 07:43:35 -- nvmf/common.sh@542 -- # cat 00:24:31.481 07:43:35 -- nvmf/common.sh@544 -- # jq . 00:24:31.481 07:43:35 -- nvmf/common.sh@545 -- # IFS=, 00:24:31.481 07:43:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme1", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme2", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme3", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme4", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode4", 
00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme5", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme6", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.481 "adrfam": "ipv4", 00:24:31.481 "trsvcid": "4420", 00:24:31.481 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:31.481 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:31.481 "hdgst": false, 00:24:31.481 "ddgst": false 00:24:31.481 }, 00:24:31.481 "method": "bdev_nvme_attach_controller" 00:24:31.481 },{ 00:24:31.481 "params": { 00:24:31.481 "name": "Nvme7", 00:24:31.481 "trtype": "tcp", 00:24:31.481 "traddr": "10.0.0.2", 00:24:31.482 "adrfam": "ipv4", 00:24:31.482 "trsvcid": "4420", 00:24:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:31.482 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:31.482 "hdgst": false, 00:24:31.482 "ddgst": false 00:24:31.482 }, 00:24:31.482 "method": "bdev_nvme_attach_controller" 00:24:31.482 },{ 00:24:31.482 "params": { 00:24:31.482 "name": "Nvme8", 00:24:31.482 "trtype": "tcp", 00:24:31.482 "traddr": "10.0.0.2", 00:24:31.482 "adrfam": "ipv4", 00:24:31.482 "trsvcid": "4420", 00:24:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:31.482 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:31.482 "hdgst": false, 00:24:31.482 "ddgst": false 00:24:31.482 }, 00:24:31.482 "method": "bdev_nvme_attach_controller" 00:24:31.482 },{ 00:24:31.482 "params": { 00:24:31.482 "name": "Nvme9", 00:24:31.482 
"trtype": "tcp", 00:24:31.482 "traddr": "10.0.0.2", 00:24:31.482 "adrfam": "ipv4", 00:24:31.482 "trsvcid": "4420", 00:24:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:31.482 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:31.482 "hdgst": false, 00:24:31.482 "ddgst": false 00:24:31.482 }, 00:24:31.482 "method": "bdev_nvme_attach_controller" 00:24:31.482 },{ 00:24:31.482 "params": { 00:24:31.482 "name": "Nvme10", 00:24:31.482 "trtype": "tcp", 00:24:31.482 "traddr": "10.0.0.2", 00:24:31.482 "adrfam": "ipv4", 00:24:31.482 "trsvcid": "4420", 00:24:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:31.482 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:31.482 "hdgst": false, 00:24:31.482 "ddgst": false 00:24:31.482 }, 00:24:31.482 "method": "bdev_nvme_attach_controller" 00:24:31.482 }' 00:24:31.482 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.482 [2024-10-07 07:43:35.267093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.482 [2024-10-07 07:43:35.335760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.391 Running I/O for 10 seconds... 
00:24:33.651 07:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:33.651 07:43:37 -- common/autotest_common.sh@852 -- # return 0 00:24:33.651 07:43:37 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:33.651 07:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:33.651 07:43:37 -- common/autotest_common.sh@10 -- # set +x 00:24:33.651 07:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:33.651 07:43:37 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:33.651 07:43:37 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:33.651 07:43:37 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:33.651 07:43:37 -- target/shutdown.sh@57 -- # local ret=1 00:24:33.651 07:43:37 -- target/shutdown.sh@58 -- # local i 00:24:33.651 07:43:37 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:33.651 07:43:37 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:33.651 07:43:37 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:33.651 07:43:37 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:33.651 07:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:33.651 07:43:37 -- common/autotest_common.sh@10 -- # set +x 00:24:33.651 07:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:33.651 07:43:37 -- target/shutdown.sh@60 -- # read_io_count=211 00:24:33.651 07:43:37 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:24:33.651 07:43:37 -- target/shutdown.sh@64 -- # ret=0 00:24:33.651 07:43:37 -- target/shutdown.sh@65 -- # break 00:24:33.651 07:43:37 -- target/shutdown.sh@69 -- # return 0 00:24:33.651 07:43:37 -- target/shutdown.sh@109 -- # killprocess 23291 00:24:33.651 07:43:37 -- common/autotest_common.sh@926 -- # '[' -z 23291 ']' 00:24:33.651 07:43:37 -- common/autotest_common.sh@930 -- # kill -0 23291 00:24:33.651 07:43:37 -- common/autotest_common.sh@931 -- # uname 00:24:33.651 
07:43:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:33.651 07:43:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 23291 00:24:33.651 07:43:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:33.651 07:43:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:33.651 07:43:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 23291' 00:24:33.651 killing process with pid 23291 00:24:33.651 07:43:37 -- common/autotest_common.sh@945 -- # kill 23291 00:24:33.651 07:43:37 -- common/autotest_common.sh@950 -- # wait 23291 00:24:33.651 Received shutdown signal, test time was about 0.650352 seconds 00:24:33.651 00:24:33.651 Latency(us) 00:24:33.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.651 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme1n1 : 0.64 490.56 30.66 0.00 0.00 126924.85 20846.69 128825.05 00:24:33.651 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme2n1 : 0.63 499.49 31.22 0.00 0.00 123373.72 20222.54 97367.77 00:24:33.651 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme3n1 : 0.63 498.65 31.17 0.00 0.00 122287.90 20347.37 106355.57 00:24:33.651 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme4n1 : 0.63 497.40 31.09 0.00 0.00 121345.53 20222.54 100863.02 00:24:33.651 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme5n1 : 0.64 494.25 30.89 0.00 0.00 120648.06 22094.99 93373.20 00:24:33.651 Job: Nvme6n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme6n1 : 0.64 492.21 30.76 0.00 0.00 119879.66 21845.33 93872.52 00:24:33.651 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme7n1 : 0.64 488.77 30.55 0.00 0.00 119671.21 20222.54 98366.42 00:24:33.651 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme8n1 : 0.65 487.36 30.46 0.00 0.00 119051.42 18724.57 99864.38 00:24:33.651 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme9n1 : 0.65 484.84 30.30 0.00 0.00 118926.19 14917.24 100863.02 00:24:33.651 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:33.651 Verification LBA range: start 0x0 length 0x400 00:24:33.651 Nvme10n1 : 0.65 488.02 30.50 0.00 0.00 116409.16 9674.36 100863.02 00:24:33.651 =================================================================================================================== 00:24:33.651 Total : 4921.55 307.60 0.00 0.00 120850.36 9674.36 128825.05 00:24:33.910 07:43:37 -- target/shutdown.sh@112 -- # sleep 1 00:24:34.853 07:43:38 -- target/shutdown.sh@113 -- # kill -0 23013 00:24:34.853 07:43:38 -- target/shutdown.sh@115 -- # stoptarget 00:24:34.853 07:43:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:34.853 07:43:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:34.853 07:43:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:34.853 07:43:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:34.853 07:43:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:34.853 07:43:38 -- 
nvmf/common.sh@116 -- # sync 00:24:34.853 07:43:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:34.853 07:43:38 -- nvmf/common.sh@119 -- # set +e 00:24:34.853 07:43:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:34.853 07:43:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:35.112 rmmod nvme_tcp 00:24:35.112 rmmod nvme_fabrics 00:24:35.112 rmmod nvme_keyring 00:24:35.112 07:43:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:35.112 07:43:38 -- nvmf/common.sh@123 -- # set -e 00:24:35.112 07:43:38 -- nvmf/common.sh@124 -- # return 0 00:24:35.112 07:43:38 -- nvmf/common.sh@477 -- # '[' -n 23013 ']' 00:24:35.112 07:43:38 -- nvmf/common.sh@478 -- # killprocess 23013 00:24:35.112 07:43:38 -- common/autotest_common.sh@926 -- # '[' -z 23013 ']' 00:24:35.112 07:43:38 -- common/autotest_common.sh@930 -- # kill -0 23013 00:24:35.112 07:43:38 -- common/autotest_common.sh@931 -- # uname 00:24:35.112 07:43:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:35.112 07:43:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 23013 00:24:35.112 07:43:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:35.112 07:43:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:35.112 07:43:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 23013' 00:24:35.112 killing process with pid 23013 00:24:35.112 07:43:38 -- common/autotest_common.sh@945 -- # kill 23013 00:24:35.112 07:43:38 -- common/autotest_common.sh@950 -- # wait 23013 00:24:35.681 07:43:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:35.681 07:43:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:35.681 07:43:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:35.681 07:43:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.681 07:43:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:35.681 07:43:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:35.681 07:43:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.681 07:43:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.587 07:43:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:37.587 00:24:37.587 real 0m7.931s 00:24:37.587 user 0m24.367s 00:24:37.587 sys 0m1.335s 00:24:37.587 07:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.587 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:24:37.587 ************************************ 00:24:37.587 END TEST nvmf_shutdown_tc2 00:24:37.587 ************************************ 00:24:37.587 07:43:41 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:37.587 07:43:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:37.587 07:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:37.587 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:24:37.587 ************************************ 00:24:37.587 START TEST nvmf_shutdown_tc3 00:24:37.587 ************************************ 00:24:37.587 07:43:41 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:24:37.587 07:43:41 -- target/shutdown.sh@120 -- # starttarget 00:24:37.587 07:43:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:37.587 07:43:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:37.587 07:43:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.587 07:43:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:37.587 07:43:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:37.587 07:43:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:37.587 07:43:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.587 07:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.587 07:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.587 07:43:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:37.587 07:43:41 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:37.587 07:43:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:37.587 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:24:37.587 07:43:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:37.587 07:43:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:37.587 07:43:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:37.587 07:43:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:37.587 07:43:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:37.587 07:43:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:37.587 07:43:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:37.587 07:43:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:37.587 07:43:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:37.587 07:43:41 -- nvmf/common.sh@295 -- # e810=() 00:24:37.587 07:43:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:37.587 07:43:41 -- nvmf/common.sh@296 -- # x722=() 00:24:37.587 07:43:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:37.587 07:43:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:37.587 07:43:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:37.587 07:43:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:24:37.587 07:43:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.587 07:43:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:37.587 07:43:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:37.587 07:43:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.587 07:43:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:37.587 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:37.587 07:43:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.587 07:43:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:37.587 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:37.587 07:43:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:37.587 07:43:41 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.587 07:43:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.587 07:43:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.587 07:43:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:37.587 Found net devices under 0000:af:00.0: cvl_0_0 00:24:37.587 07:43:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.587 07:43:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.587 07:43:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.587 07:43:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.587 07:43:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:37.587 Found net devices under 0000:af:00.1: cvl_0_1 00:24:37.587 07:43:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.587 07:43:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:37.587 07:43:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:37.587 07:43:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:37.587 07:43:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.587 07:43:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.587 07:43:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.587 07:43:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:37.587 07:43:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.587 07:43:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.587 07:43:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:37.587 07:43:41 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.587 07:43:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.587 07:43:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:37.587 07:43:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:37.587 07:43:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.587 07:43:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.847 07:43:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.847 07:43:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.847 07:43:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:37.847 07:43:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.847 07:43:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.847 07:43:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.847 07:43:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:37.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:37.847 00:24:37.847 --- 10.0.0.2 ping statistics --- 00:24:37.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.847 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:37.847 07:43:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:37.847 00:24:37.847 --- 10.0.0.1 ping statistics --- 00:24:37.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.847 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:37.847 07:43:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.847 07:43:41 -- nvmf/common.sh@410 -- # return 0 00:24:37.847 07:43:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:37.847 07:43:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.847 07:43:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:37.847 07:43:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:37.847 07:43:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.847 07:43:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:37.847 07:43:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:37.847 07:43:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:37.847 07:43:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:37.847 07:43:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:37.847 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:24:37.847 07:43:41 -- nvmf/common.sh@469 -- # nvmfpid=24450 00:24:37.847 07:43:41 -- nvmf/common.sh@470 -- # waitforlisten 24450 00:24:37.847 07:43:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:37.847 07:43:41 -- common/autotest_common.sh@819 -- # '[' -z 24450 ']' 00:24:37.847 07:43:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.847 07:43:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:37.847 07:43:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:37.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.847 07:43:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:37.847 07:43:41 -- common/autotest_common.sh@10 -- # set +x 00:24:37.847 [2024-10-07 07:43:41.801191] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:37.847 [2024-10-07 07:43:41.801237] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.106 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.106 [2024-10-07 07:43:41.860140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.106 [2024-10-07 07:43:41.937785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:38.106 [2024-10-07 07:43:41.937891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.106 [2024-10-07 07:43:41.937899] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.106 [2024-10-07 07:43:41.937906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.106 [2024-10-07 07:43:41.938003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.106 [2024-10-07 07:43:41.938088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.106 [2024-10-07 07:43:41.938114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.106 [2024-10-07 07:43:41.938115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:38.673 07:43:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:38.673 07:43:42 -- common/autotest_common.sh@852 -- # return 0 00:24:38.673 07:43:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:38.673 07:43:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:38.673 07:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.932 07:43:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.932 07:43:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.932 07:43:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:38.932 07:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 [2024-10-07 07:43:42.667379] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.933 07:43:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:38.933 07:43:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:38.933 07:43:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:38.933 07:43:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:38.933 07:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 07:43:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:38.933 07:43:42 -- target/shutdown.sh@28 -- # cat 00:24:38.933 07:43:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:38.933 07:43:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:38.933 07:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.933 Malloc1 00:24:38.933 [2024-10-07 07:43:42.762731] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.933 Malloc2 00:24:38.933 Malloc3 00:24:38.933 Malloc4 00:24:39.191 Malloc5 00:24:39.191 Malloc6 00:24:39.191 Malloc7 00:24:39.191 Malloc8 00:24:39.191 Malloc9 00:24:39.191 Malloc10 00:24:39.191 07:43:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.191 07:43:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:39.191 07:43:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:39.191 07:43:43 -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.451 07:43:43 -- target/shutdown.sh@124 -- # perfpid=24741 00:24:39.451 07:43:43 -- target/shutdown.sh@125 -- # waitforlisten 24741 /var/tmp/bdevperf.sock 00:24:39.451 07:43:43 -- common/autotest_common.sh@819 -- # '[' -z 24741 ']' 00:24:39.451 07:43:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.451 07:43:43 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:39.451 07:43:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:39.451 07:43:43 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:39.451 07:43:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:39.451 07:43:43 -- nvmf/common.sh@520 -- # config=() 00:24:39.451 07:43:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:39.451 07:43:43 -- nvmf/common.sh@520 -- # local subsystem config 00:24:39.451 07:43:43 -- common/autotest_common.sh@10 -- # set +x 00:24:39.451 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.451 { 00:24:39.451 "params": { 00:24:39.451 "name": "Nvme$subsystem", 00:24:39.451 "trtype": "$TEST_TRANSPORT", 00:24:39.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.451 "adrfam": "ipv4", 00:24:39.451 "trsvcid": "$NVMF_PORT", 00:24:39.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.451 "hdgst": ${hdgst:-false}, 00:24:39.451 "ddgst": ${ddgst:-false} 00:24:39.451 }, 00:24:39.451 "method": "bdev_nvme_attach_controller" 00:24:39.451 } 00:24:39.451 EOF 00:24:39.451 )") 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.451 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.451 { 00:24:39.451 "params": { 00:24:39.451 "name": "Nvme$subsystem", 00:24:39.451 "trtype": "$TEST_TRANSPORT", 00:24:39.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.451 "adrfam": "ipv4", 00:24:39.451 "trsvcid": "$NVMF_PORT", 00:24:39.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.451 "hdgst": ${hdgst:-false}, 00:24:39.451 "ddgst": ${ddgst:-false} 00:24:39.451 }, 00:24:39.451 "method": "bdev_nvme_attach_controller" 00:24:39.451 } 00:24:39.451 EOF 00:24:39.451 )") 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.451 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.451 { 00:24:39.451 "params": { 00:24:39.451 "name": 
"Nvme$subsystem", 00:24:39.451 "trtype": "$TEST_TRANSPORT", 00:24:39.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.451 "adrfam": "ipv4", 00:24:39.451 "trsvcid": "$NVMF_PORT", 00:24:39.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.451 "hdgst": ${hdgst:-false}, 00:24:39.451 "ddgst": ${ddgst:-false} 00:24:39.451 }, 00:24:39.451 "method": "bdev_nvme_attach_controller" 00:24:39.451 } 00:24:39.451 EOF 00:24:39.451 )") 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.451 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.451 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.451 { 00:24:39.451 "params": { 00:24:39.451 "name": "Nvme$subsystem", 00:24:39.451 "trtype": "$TEST_TRANSPORT", 00:24:39.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.451 "adrfam": "ipv4", 00:24:39.451 "trsvcid": "$NVMF_PORT", 00:24:39.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.451 "hdgst": ${hdgst:-false}, 00:24:39.451 "ddgst": ${ddgst:-false} 00:24:39.451 }, 00:24:39.451 "method": "bdev_nvme_attach_controller" 00:24:39.451 } 00:24:39.451 EOF 00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 
00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 00:24:39.452 )") 00:24:39.452 [2024-10-07 07:43:43.236109] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:39.452 [2024-10-07 07:43:43.236173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24741 ] 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 07:43:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:24:39.452 { 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme$subsystem", 00:24:39.452 "trtype": "$TEST_TRANSPORT", 00:24:39.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "$NVMF_PORT", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.452 "hdgst": ${hdgst:-false}, 00:24:39.452 "ddgst": ${ddgst:-false} 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 } 00:24:39.452 EOF 00:24:39.452 )") 00:24:39.452 07:43:43 -- nvmf/common.sh@542 -- # cat 00:24:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.452 07:43:43 -- nvmf/common.sh@544 -- # jq . 00:24:39.452 07:43:43 -- nvmf/common.sh@545 -- # IFS=, 00:24:39.452 07:43:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme1", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme2", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme3", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:39.452 "hostnqn": 
"nqn.2016-06.io.spdk:host3", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme4", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme5", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme6", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme7", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme8", 00:24:39.452 "trtype": "tcp", 00:24:39.452 
"traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme9", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.452 "trsvcid": "4420", 00:24:39.452 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:39.452 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:39.452 "hdgst": false, 00:24:39.452 "ddgst": false 00:24:39.452 }, 00:24:39.452 "method": "bdev_nvme_attach_controller" 00:24:39.452 },{ 00:24:39.452 "params": { 00:24:39.452 "name": "Nvme10", 00:24:39.452 "trtype": "tcp", 00:24:39.452 "traddr": "10.0.0.2", 00:24:39.452 "adrfam": "ipv4", 00:24:39.453 "trsvcid": "4420", 00:24:39.453 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:39.453 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:39.453 "hdgst": false, 00:24:39.453 "ddgst": false 00:24:39.453 }, 00:24:39.453 "method": "bdev_nvme_attach_controller" 00:24:39.453 }' 00:24:39.453 [2024-10-07 07:43:43.294050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.453 [2024-10-07 07:43:43.363906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.360 Running I/O for 10 seconds... 
00:24:41.623 07:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:41.623 07:43:45 -- common/autotest_common.sh@852 -- # return 0 00:24:41.623 07:43:45 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:41.623 07:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.623 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:24:41.623 07:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.623 07:43:45 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.623 07:43:45 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:41.623 07:43:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:41.623 07:43:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:41.623 07:43:45 -- target/shutdown.sh@57 -- # local ret=1 00:24:41.623 07:43:45 -- target/shutdown.sh@58 -- # local i 00:24:41.623 07:43:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:41.623 07:43:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:41.623 07:43:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:41.623 07:43:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:41.623 07:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.623 07:43:45 -- common/autotest_common.sh@10 -- # set +x 00:24:41.623 07:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.623 07:43:45 -- target/shutdown.sh@60 -- # read_io_count=167 00:24:41.623 07:43:45 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:24:41.623 07:43:45 -- target/shutdown.sh@64 -- # ret=0 00:24:41.623 07:43:45 -- target/shutdown.sh@65 -- # break 00:24:41.623 07:43:45 -- target/shutdown.sh@69 -- # return 0 00:24:41.623 07:43:45 -- target/shutdown.sh@134 -- # killprocess 24450 00:24:41.623 07:43:45 -- common/autotest_common.sh@926 -- # '[' -z 
24450 ']' 00:24:41.624 07:43:45 -- common/autotest_common.sh@930 -- # kill -0 24450 00:24:41.624 07:43:45 -- common/autotest_common.sh@931 -- # uname 00:24:41.624 07:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:41.624 07:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 24450 00:24:41.624 07:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:41.624 07:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:41.624 07:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 24450' killing process with pid 24450 00:24:41.624 07:43:45 -- common/autotest_common.sh@945 -- # kill 24450 00:24:41.624 07:43:45 -- common/autotest_common.sh@950 -- # wait 24450 00:24:41.624 [2024-10-07 07:43:45.573861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.574307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0faa0 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 
is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.624 [2024-10-07 07:43:45.575701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 
00:24:41.625 [2024-10-07 07:43:45.575719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575797] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 
is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.575996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9f40 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 
00:24:41.625 [2024-10-07 07:43:45.577183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577257] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.625 [2024-10-07 07:43:45.577386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 
is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 
00:24:41.626 [2024-10-07 07:43:45.577491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.577547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ff30 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.579204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 
[2024-10-07 07:43:45.579246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17efae0 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.579346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a20e0 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.579436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 07:43:45.579473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.626 [2024-10-07 
07:43:45.579487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.579493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868b40 is same with the state(5) to be set 00:24:41.626 [2024-10-07 07:43:45.580796] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.626 [2024-10-07 07:43:45.580858] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.626 [2024-10-07 07:43:45.581337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 
07:43:45.581430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.626 [2024-10-07 07:43:45.581563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.626 [2024-10-07 07:43:45.581570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.627 [2024-10-07 07:43:45.581681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:47 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.581990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.581996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 
07:43:45.582010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.627 [2024-10-07 07:43:45.582151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.627 [2024-10-07 07:43:45.582161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.628 [2024-10-07 07:43:45.582316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.628 [2024-10-07 07:43:45.582323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1754c00 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with 
the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 
[2024-10-07 07:43:45.582548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582627] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.628 [2024-10-07 07:43:45.582663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582746] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1754c00 was disconnected and freed. reset controller. 
00:24:41.629 [2024-10-07 07:43:45.582750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.582765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa103e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:41.629 [2024-10-07 07:43:45.584186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17efae0 (9): Bad file descriptor 00:24:41.629 [2024-10-07 07:43:45.584214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584221] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584273] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.629 [2024-10-07 07:43:45.584278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is 
same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 
00:24:41.629 [2024-10-07 07:43:45.584379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584459] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.584578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8718e0 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.585435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.585457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.629 [2024-10-07 07:43:45.585465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 
is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 
00:24:41.630 [2024-10-07 07:43:45.585563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585635] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 
is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.585845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871d90 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586238] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.630 [2024-10-07 07:43:45.586869] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.630 [2024-10-07 07:43:45.586926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.586999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the 
state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.630 [2024-10-07 07:43:45.587074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 
07:43:45.587112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587188] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872240 
is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.587678] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.631 [2024-10-07 07:43:45.588417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588501] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588514] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:41.631 [2024-10-07 07:43:45.588526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is 
same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.631 [2024-10-07 07:43:45.588601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.589312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1606990 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.589414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4e40 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.589491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a20e0 (9): Bad file descriptor 00:24:41.896 [2024-10-07 
07:43:45.589517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da640 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.589605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868000 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.589674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868b40 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.589697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589730] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.896 [2024-10-07 07:43:45.589750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.589756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5190 is same with the state(5) to be set 00:24:41.896 [2024-10-07 07:43:45.599335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1606990 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.599364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4e40 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.599383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16da640 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.599398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868000 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.599416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5190 (9): Bad file descriptor 00:24:41.896 [2024-10-07 07:43:45.599523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.896 [2024-10-07 07:43:45.599534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 
[2024-10-07 07:43:45.599546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.896 [2024-10-07 07:43:45.599553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.599562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.896 [2024-10-07 07:43:45.599569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.599577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.896 [2024-10-07 07:43:45.599586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.896 [2024-10-07 07:43:45.599595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:41.897 [2024-10-07 07:43:45.599810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.599993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.599999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 
07:43:45.600160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.897 [2024-10-07 07:43:45.600198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.897 [2024-10-07 07:43:45.600202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.897 [2024-10-07 07:43:45.600205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600274] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 
[2024-10-07 07:43:45.600321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8726f0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.600478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.898 [2024-10-07 07:43:45.600555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.898 [2024-10-07 07:43:45.600563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17510a0 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.898 [2024-10-07 07:43:45.601320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same 
with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x872b80 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601781] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 
[2024-10-07 07:43:45.601834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to be set 00:24:41.899 [2024-10-07 07:43:45.601882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f9a90 is same with the state(5) to 
be set 00:24:41.899 [2024-10-07 07:43:45.601882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.899 [2024-10-07 07:43:45.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.899 [2024-10-07 07:43:45.601931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.601937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.601946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.601952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.601961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.601967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.601975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.601982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.601991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.601998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 
07:43:45.602236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.602330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.602338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.607921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.607933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.607943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.607951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.607959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.607972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.607979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.607988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.607994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.608003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.608010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.608019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.608026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.900 [2024-10-07 07:43:45.608034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.900 [2024-10-07 07:43:45.608041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34176 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.608152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.608159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd870 is same with the state(5) to be set 00:24:41.901 [2024-10-07 07:43:45.609118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 
07:43:45.609130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609224] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 
[2024-10-07 07:43:45.609407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.901 [2024-10-07 07:43:45.609628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.901 [2024-10-07 07:43:45.609636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609847] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.609987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.609996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 
07:43:45.610027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610193] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18fad10 was disconnected and freed. reset controller. 00:24:41.902 [2024-10-07 07:43:45.610228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1752010 is same with the state(5) to be set 00:24:41.902 [2024-10-07 07:43:45.610315] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1752010 was disconnected and freed. reset controller. 
00:24:41.902 [2024-10-07 07:43:45.610341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.902 [2024-10-07 07:43:45.610370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.902 [2024-10-07 07:43:45.610378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17535f0 is same with the state(5) to be set 00:24:41.902 [2024-10-07 07:43:45.610427] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17535f0 was disconnected and freed. reset controller. 
00:24:41.902 [2024-10-07 07:43:45.610441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.902 [2024-10-07 07:43:45.610451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:41.902 [2024-10-07 07:43:45.610498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:41.903 [2024-10-07 07:43:45.610507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17efae0 is same with the state(5) to be set 00:24:41.903 [2024-10-07 07:43:45.610535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17efae0 (9): Bad file descriptor 00:24:41.903 [2024-10-07 07:43:45.610559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.903 [2024-10-07 07:43:45.610604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fd470 is same with the state(5) to be set 00:24:41.903 [2024-10-07 07:43:45.610637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.903 [2024-10-07 07:43:45.610689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.610696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x175af00 is same with the state(5) to be set 00:24:41.903 [2024-10-07 07:43:45.610721] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.903 [2024-10-07 07:43:45.613366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:41.903 [2024-10-07 07:43:45.613391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:41.903 [2024-10-07 07:43:45.613404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175af00 (9): Bad file descriptor 00:24:41.903 [2024-10-07 07:43:45.613743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.903 [2024-10-07 07:43:45.613985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.903 [2024-10-07 07:43:45.614000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a20e0 with addr=10.0.0.2, port=4420 00:24:41.903 [2024-10-07 07:43:45.614011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a20e0 is same with the state(5) to be set 00:24:41.903 [2024-10-07 07:43:45.614161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.903 [2024-10-07 07:43:45.614356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.903 [2024-10-07 07:43:45.614370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1868b40 with addr=10.0.0.2, port=4420 00:24:41.903 [2024-10-07 07:43:45.614379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868b40 is same with the state(5) to be set 00:24:41.903 [2024-10-07 07:43:45.614958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 
[2024-10-07 07:43:45.614973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.614988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.614998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.903 [2024-10-07 07:43:45.615449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.903 [2024-10-07 07:43:45.615458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 
07:43:45.615787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.615983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.615992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:32 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 
07:43:45.616232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.904 [2024-10-07 07:43:45.616242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.904 [2024-10-07 07:43:45.616250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.616259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dedc0 is same with the state(5) to be set 00:24:41.905 [2024-10-07 07:43:45.617445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617525] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 
07:43:45.617746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.617987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.617997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.618005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.618018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.618027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.618037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.618047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.618065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.618074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:41.905 [2024-10-07 07:43:45.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.905 [2024-10-07 07:43:45.618093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.905 [2024-10-07 07:43:45.618105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 
07:43:45.618524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.618703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.618713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e0360 is same with the state(5) to be set 00:24:41.906 [2024-10-07 07:43:45.619869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.619898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.619919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.619941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.619965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.619985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.619995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.620006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.620015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.906 [2024-10-07 07:43:45.620026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.620035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.906 [2024-10-07 07:43:45.620046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.906 [2024-10-07 07:43:45.620055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 
07:43:45.620489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.907 [2024-10-07 07:43:45.620844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.907 [2024-10-07 07:43:45.620855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620932] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.620981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.620991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621037] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.621159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.621169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8150 is same with the state(5) to be set 00:24:41.908 [2024-10-07 07:43:45.622271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 
07:43:45.622353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 
[2024-10-07 07:43:45.622621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.908 [2024-10-07 07:43:45.622636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.908 [2024-10-07 07:43:45.622645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622974] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.622988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.622996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623055] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.909 [2024-10-07 07:43:45.623171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.909 [2024-10-07 07:43:45.623179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 
07:43:45.623243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.910 [2024-10-07 07:43:45.623283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.910 [2024-10-07 07:43:45.623290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9730 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.624248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:41.910 [2024-10-07 07:43:45.624266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:41.910 [2024-10-07 07:43:45.624275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:41.910 [2024-10-07 07:43:45.624284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:41.910 [2024-10-07 07:43:45.624311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fd470 (9): Bad file descriptor 00:24:41.910 [2024-10-07 07:43:45.624617] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.624871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.624883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16da640 with addr=10.0.0.2, port=4420 00:24:41.910 [2024-10-07 07:43:45.624892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16da640 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.624907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a20e0 (9): Bad file descriptor 00:24:41.910 [2024-10-07 07:43:45.624917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868b40 (9): Bad file descriptor 00:24:41.910 [2024-10-07 07:43:45.624926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:41.910 [2024-10-07 07:43:45.624933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:41.910 [2024-10-07 07:43:45.624941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:41.910 [2024-10-07 07:43:45.624975] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.910 [2024-10-07 07:43:45.624993] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.910 [2024-10-07 07:43:45.625005] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.910 [2024-10-07 07:43:45.625014] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:41.910 [2024-10-07 07:43:45.625024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16da640 (9): Bad file descriptor 
00:24:41.910 task offset: 25600 on job bdev=Nvme10n1 fails 
00:24:41.910 
00:24:41.910 Latency(us) 
00:24:41.910 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:24:41.910 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme1n1 ended in about 0.53 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme1n1  : 0.53     395.60     24.73    121.72     0.00  122703.77   58171.00  125329.80 
00:24:41.910 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme2n1 ended in about 0.53 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme2n1  : 0.53     389.98     24.37    119.99     0.00  123036.96   76895.57   97367.77 
00:24:41.910 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme3n1 ended in about 0.54 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme3n1  : 0.54     384.13     24.01    118.19     0.00  123424.30   77394.90   99365.06 
00:24:41.910 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme4n1 ended in about 0.54 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme4n1  : 0.54     382.40     23.90    117.66     0.00  122552.32   78393.54   95869.81 
00:24:41.910 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme5n1 ended in about 0.55 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme5n1  : 0.55     380.68     23.79    117.13     0.00  121661.98   79392.18   94371.84 
00:24:41.910 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme6n1 ended in about 0.55 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme6n1  : 0.55     379.25     23.70    116.69     0.00  120635.81   71403.03   94871.16 
00:24:41.910 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme7n1 ended in about 0.54 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme7n1  : 0.54     388.09     24.26    119.41     0.00  116219.01   50930.83   97867.09 
00:24:41.910 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme8n1 ended in about 0.54 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme8n1  : 0.54     503.12     31.44      3.73     0.00  110962.99    4119.41   99864.38 
00:24:41.910 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme9n1 ended in about 0.54 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme9n1  : 0.54     502.34     31.40      3.72     0.00  109676.25    4681.14  100863.02 
00:24:41.910 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:41.910 Job: Nvme10n1 ended in about 0.51 seconds with error 
00:24:41.910 Verification LBA range: start 0x0 length 0x400 
00:24:41.910 Nvme10n1 : 0.51     344.12     21.51    125.85     0.00  119931.79    4337.86  102360.99 
00:24:41.910 =================================================================================================================== 
00:24:41.910 Total    : 4049.70    253.11    964.10     0.00  119070.06    4119.41  125329.80 
00:24:41.910 [2024-10-07 07:43:45.650140] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:24:41.910 [2024-10-07 07:43:45.650187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:24:41.910 [2024-10-07 07:43:45.650206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.910 [2024-10-07 07:43:45.650561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.650817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.650830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175af00 with addr=10.0.0.2, port=4420 00:24:41.910 [2024-10-07 07:43:45.650841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175af00 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.651090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.651370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.651382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a4e40 with addr=10.0.0.2, port=4420 00:24:41.910 [2024-10-07 07:43:45.651390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a4e40 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.651661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.651859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.651871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1606990 with addr=10.0.0.2, port=4420 00:24:41.910 [2024-10-07 07:43:45.651879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1606990 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.652172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.652353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.652366] 
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c5190 with addr=10.0.0.2, port=4420 00:24:41.910 [2024-10-07 07:43:45.652381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5190 is same with the state(5) to be set 00:24:41.910 [2024-10-07 07:43:45.652399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.910 [2024-10-07 07:43:45.652407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.910 [2024-10-07 07:43:45.652417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.910 [2024-10-07 07:43:45.652433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:41.910 [2024-10-07 07:43:45.652440] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:41.910 [2024-10-07 07:43:45.652448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:41.910 [2024-10-07 07:43:45.653714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.910 [2024-10-07 07:43:45.653735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.910 [2024-10-07 07:43:45.654086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.910 [2024-10-07 07:43:45.654293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.654307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fd470 with addr=10.0.0.2, port=4420 00:24:41.911 [2024-10-07 07:43:45.654316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fd470 is same with the state(5) to be set 00:24:41.911 [2024-10-07 07:43:45.654570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.654764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.654777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1868000 with addr=10.0.0.2, port=4420 00:24:41.911 [2024-10-07 07:43:45.654784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868000 is same with the state(5) to be set 00:24:41.911 [2024-10-07 07:43:45.654800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175af00 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.654812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4e40 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.654822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1606990 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.654831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c5190 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.654839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.654846] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.654855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:41.911 [2024-10-07 07:43:45.654900] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.911 [2024-10-07 07:43:45.654912] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.911 [2024-10-07 07:43:45.654922] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.911 [2024-10-07 07:43:45.654932] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.911 [2024-10-07 07:43:45.654941] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.911 [2024-10-07 07:43:45.655024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.655048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fd470 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.655064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868000 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.655073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:41.911 [2024-10-07 07:43:45.655097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:41.911 [2024-10-07 07:43:45.655120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:41.911 [2024-10-07 07:43:45.655144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:41.911 [2024-10-07 07:43:45.655205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:41.911 [2024-10-07 07:43:45.655220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:41.911 [2024-10-07 07:43:45.655228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.911 [2024-10-07 07:43:45.655236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.911 [2024-10-07 07:43:45.655243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.655249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.655255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.655276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:41.911 [2024-10-07 07:43:45.655299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.655306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.655313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:41.911 [2024-10-07 07:43:45.655337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.655343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.911 [2024-10-07 07:43:45.655642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.655914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.655925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17efae0 with addr=10.0.0.2, port=4420 00:24:41.911 [2024-10-07 07:43:45.655937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17efae0 is same with the state(5) to be set 00:24:41.911 [2024-10-07 07:43:45.656139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.656360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.656371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1868b40 with addr=10.0.0.2, port=4420 00:24:41.911 [2024-10-07 07:43:45.656379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1868b40 is same with the state(5) to be set 00:24:41.911 [2024-10-07 07:43:45.656582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.656760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.911 [2024-10-07 07:43:45.656771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a20e0 with addr=10.0.0.2, port=4420 00:24:41.911 [2024-10-07 07:43:45.656778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a20e0 is same with the state(5) to be set 00:24:41.911 [2024-10-07 07:43:45.656808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17efae0 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.656821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868b40 (9): Bad file 
descriptor 00:24:41.911 [2024-10-07 07:43:45.656829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a20e0 (9): Bad file descriptor 00:24:41.911 [2024-10-07 07:43:45.656857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.656866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.656873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:41.911 [2024-10-07 07:43:45.656882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.656889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.656896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:41.911 [2024-10-07 07:43:45.656905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.911 [2024-10-07 07:43:45.656911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.911 [2024-10-07 07:43:45.656918] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.911 [2024-10-07 07:43:45.656943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.656949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.911 [2024-10-07 07:43:45.656956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.170 07:43:46 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:42.170 07:43:46 -- target/shutdown.sh@138 -- # sleep 1 00:24:43.107 07:43:47 -- target/shutdown.sh@141 -- # kill -9 24741 00:24:43.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (24741) - No such process 00:24:43.107 07:43:47 -- target/shutdown.sh@141 -- # true 00:24:43.107 07:43:47 -- target/shutdown.sh@143 -- # stoptarget 00:24:43.107 07:43:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:43.107 07:43:47 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:43.107 07:43:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:43.107 07:43:47 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:43.107 07:43:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:43.107 07:43:47 -- nvmf/common.sh@116 -- # sync 00:24:43.107 07:43:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:43.107 07:43:47 -- nvmf/common.sh@119 -- # set +e 00:24:43.107 07:43:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:43.107 07:43:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:43.107 rmmod nvme_tcp 00:24:43.366 rmmod nvme_fabrics 00:24:43.366 rmmod nvme_keyring 00:24:43.366 07:43:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:43.366 07:43:47 -- nvmf/common.sh@123 -- # set -e 00:24:43.366 07:43:47 -- nvmf/common.sh@124 -- # return 0 00:24:43.366 07:43:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:43.366 07:43:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:43.366 07:43:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:43.366 07:43:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:43.366 07:43:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.366 07:43:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:43.366 07:43:47 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.366 07:43:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.366 07:43:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.271 07:43:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:45.271 00:24:45.271 real 0m7.731s 00:24:45.271 user 0m19.161s 00:24:45.271 sys 0m1.239s 00:24:45.271 07:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.271 07:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.271 ************************************ 00:24:45.271 END TEST nvmf_shutdown_tc3 00:24:45.271 ************************************ 00:24:45.271 07:43:49 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:45.271 00:24:45.271 real 0m30.678s 00:24:45.271 user 1m17.287s 00:24:45.271 sys 0m8.176s 00:24:45.271 07:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.271 07:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.271 ************************************ 00:24:45.271 END TEST nvmf_shutdown 00:24:45.271 ************************************ 00:24:45.531 07:43:49 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:45.531 07:43:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.531 07:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.531 07:43:49 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:45.531 07:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:45.531 07:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:45.531 07:43:49 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:45.531 07:43:49 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:45.531 07:43:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:45.531 07:43:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:45.531 07:43:49 -- common/autotest_common.sh@10 -- # set 
+x 00:24:45.531 ************************************ 00:24:45.531 START TEST nvmf_multicontroller 00:24:45.531 ************************************ 00:24:45.531 07:43:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:45.531 * Looking for test storage... 00:24:45.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.531 07:43:49 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.531 07:43:49 -- nvmf/common.sh@7 -- # uname -s 00:24:45.531 07:43:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.531 07:43:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.531 07:43:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.531 07:43:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.531 07:43:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.531 07:43:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.531 07:43:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.531 07:43:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.531 07:43:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.531 07:43:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.531 07:43:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:45.531 07:43:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:45.531 07:43:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.531 07:43:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.531 07:43:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.531 07:43:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.531 07:43:49 -- scripts/common.sh@433 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:24:45.531 07:43:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.531 07:43:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.531 07:43:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.531 07:43:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.531 07:43:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.531 07:43:49 -- paths/export.sh@5 -- # export PATH 
00:24:45.531 07:43:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.531 07:43:49 -- nvmf/common.sh@46 -- # : 0 00:24:45.531 07:43:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:45.531 07:43:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:45.531 07:43:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:45.531 07:43:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.531 07:43:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.531 07:43:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:45.531 07:43:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:45.531 07:43:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:45.531 07:43:49 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.531 07:43:49 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.531 07:43:49 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:45.531 07:43:49 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:45.531 07:43:49 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.531 07:43:49 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:45.531 07:43:49 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:45.531 07:43:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:45.531 07:43:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.531 07:43:49 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:24:45.531 07:43:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:45.531 07:43:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:45.531 07:43:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.531 07:43:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.531 07:43:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.531 07:43:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:45.531 07:43:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:45.531 07:43:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:45.531 07:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:50.813 07:43:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:50.813 07:43:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:50.813 07:43:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:50.813 07:43:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:50.813 07:43:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:50.813 07:43:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:50.813 07:43:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:50.813 07:43:54 -- nvmf/common.sh@294 -- # net_devs=() 00:24:50.813 07:43:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:50.813 07:43:54 -- nvmf/common.sh@295 -- # e810=() 00:24:50.813 07:43:54 -- nvmf/common.sh@295 -- # local -ga e810 00:24:50.813 07:43:54 -- nvmf/common.sh@296 -- # x722=() 00:24:50.813 07:43:54 -- nvmf/common.sh@296 -- # local -ga x722 00:24:50.813 07:43:54 -- nvmf/common.sh@297 -- # mlx=() 00:24:50.813 07:43:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:50.813 07:43:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.813 07:43:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:50.813 07:43:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:50.813 07:43:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:50.813 07:43:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:50.813 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:50.813 07:43:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:50.813 07:43:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:50.813 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:50.813 07:43:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:50.813 
07:43:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:50.813 07:43:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.813 07:43:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.813 07:43:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:50.813 Found net devices under 0000:af:00.0: cvl_0_0 00:24:50.813 07:43:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.813 07:43:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:50.813 07:43:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.813 07:43:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.813 07:43:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:50.813 Found net devices under 0000:af:00.1: cvl_0_1 00:24:50.813 07:43:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.813 07:43:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:50.813 07:43:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:50.813 07:43:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:50.813 07:43:54 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:50.813 07:43:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.813 07:43:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.813 07:43:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:50.813 07:43:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.813 07:43:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.813 07:43:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:50.813 07:43:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.813 07:43:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.813 07:43:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:50.813 07:43:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:50.813 07:43:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.813 07:43:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.073 07:43:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.073 07:43:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.073 07:43:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:51.073 07:43:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.073 07:43:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.073 07:43:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.073 07:43:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:51.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:51.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:24:51.073 00:24:51.073 --- 10.0.0.2 ping statistics --- 00:24:51.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.073 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:24:51.073 07:43:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:24:51.073 00:24:51.073 --- 10.0.0.1 ping statistics --- 00:24:51.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.073 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:24:51.073 07:43:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.073 07:43:54 -- nvmf/common.sh@410 -- # return 0 00:24:51.073 07:43:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:51.073 07:43:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.073 07:43:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:51.073 07:43:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:51.073 07:43:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.073 07:43:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:51.073 07:43:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:51.073 07:43:55 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:51.073 07:43:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:51.073 07:43:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:51.073 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:51.073 07:43:55 -- nvmf/common.sh@469 -- # nvmfpid=28832 00:24:51.073 07:43:55 -- nvmf/common.sh@470 -- # waitforlisten 28832 00:24:51.073 07:43:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:51.073 07:43:55 -- 
common/autotest_common.sh@819 -- # '[' -z 28832 ']' 00:24:51.073 07:43:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.073 07:43:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:51.073 07:43:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.073 07:43:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:51.073 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:51.333 [2024-10-07 07:43:55.067969] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:51.333 [2024-10-07 07:43:55.068009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.333 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.333 [2024-10-07 07:43:55.126238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.333 [2024-10-07 07:43:55.200041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:51.333 [2024-10-07 07:43:55.200156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.333 [2024-10-07 07:43:55.200164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.333 [2024-10-07 07:43:55.200171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.333 [2024-10-07 07:43:55.200267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.333 [2024-10-07 07:43:55.200358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.333 [2024-10-07 07:43:55.200359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.271 07:43:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:52.271 07:43:55 -- common/autotest_common.sh@852 -- # return 0 00:24:52.271 07:43:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:52.271 07:43:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:52.271 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.271 07:43:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.271 07:43:55 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.271 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.271 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.271 [2024-10-07 07:43:55.940342] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.271 07:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.271 07:43:55 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:52.271 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.271 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 Malloc0 00:24:52.272 07:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:55 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.272 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:55 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.272 07:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 [2024-10-07 07:43:56.012966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 [2024-10-07 07:43:56.020879] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 Malloc1 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:52.272 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 07:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.272 07:43:56 -- host/multicontroller.sh@44 -- # bdevperf_pid=29074 00:24:52.272 07:43:56 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.272 07:43:56 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:52.272 07:43:56 -- host/multicontroller.sh@47 -- # waitforlisten 29074 /var/tmp/bdevperf.sock 00:24:52.272 07:43:56 -- common/autotest_common.sh@819 -- # '[' -z 29074 ']' 00:24:52.272 07:43:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.272 07:43:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:52.272 07:43:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:52.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.272 07:43:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:52.272 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 07:43:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:53.210 07:43:56 -- common/autotest_common.sh@852 -- # return 0 00:24:53.210 07:43:56 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:53.210 07:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.210 07:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 NVMe0n1 00:24:53.210 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.210 07:43:57 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.210 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.210 07:43:57 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:53.210 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.210 1 00:24:53.210 07:43:57 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:53.210 07:43:57 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.210 07:43:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:53.210 07:43:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.210 07:43:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:53.210 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.210 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.210 request: 00:24:53.210 { 00:24:53.210 "name": "NVMe0", 00:24:53.210 "trtype": "tcp", 00:24:53.210 "traddr": "10.0.0.2", 00:24:53.210 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:53.210 "hostaddr": "10.0.0.2", 00:24:53.210 "hostsvcid": "60000", 00:24:53.210 "adrfam": "ipv4", 00:24:53.210 "trsvcid": "4420", 00:24:53.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.210 "method": "bdev_nvme_attach_controller", 00:24:53.210 "req_id": 1 00:24:53.210 } 00:24:53.210 Got JSON-RPC error response 00:24:53.210 response: 00:24:53.210 { 00:24:53.210 "code": -114, 00:24:53.210 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:53.210 } 00:24:53.210 07:43:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:53.210 07:43:57 -- common/autotest_common.sh@643 -- # es=1 00:24:53.210 07:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.210 07:43:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.210 07:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.210 07:43:57 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:53.210 07:43:57 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.210 07:43:57 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:53.210 07:43:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:53.210 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.210 07:43:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:53.210 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.210 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 request: 00:24:53.470 { 00:24:53.470 "name": "NVMe0", 00:24:53.470 "trtype": "tcp", 00:24:53.470 "traddr": "10.0.0.2", 00:24:53.470 "hostaddr": "10.0.0.2", 00:24:53.470 "hostsvcid": "60000", 00:24:53.470 "adrfam": "ipv4", 00:24:53.470 "trsvcid": "4420", 00:24:53.470 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:53.470 "method": "bdev_nvme_attach_controller", 00:24:53.470 "req_id": 1 00:24:53.470 } 00:24:53.470 Got JSON-RPC error response 00:24:53.470 response: 00:24:53.470 { 00:24:53.470 "code": -114, 00:24:53.470 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:53.470 } 00:24:53.470 07:43:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@643 -- # es=1 00:24:53.470 07:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.470 07:43:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.470 07:43:57 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.470 07:43:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.470 07:43:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 request: 00:24:53.470 { 00:24:53.470 "name": "NVMe0", 00:24:53.470 "trtype": "tcp", 00:24:53.470 "traddr": "10.0.0.2", 00:24:53.470 "hostaddr": "10.0.0.2", 00:24:53.470 "hostsvcid": "60000", 00:24:53.470 "adrfam": "ipv4", 00:24:53.470 "trsvcid": "4420", 00:24:53.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.470 "multipath": "disable", 00:24:53.470 "method": "bdev_nvme_attach_controller", 00:24:53.470 "req_id": 1 00:24:53.470 } 00:24:53.470 Got JSON-RPC error response 00:24:53.470 response: 00:24:53.470 { 00:24:53.470 "code": -114, 00:24:53.470 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:53.470 } 00:24:53.470 07:43:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@643 -- # es=1 00:24:53.470 07:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.470 07:43:57 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.470 07:43:57 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:53.470 07:43:57 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.470 07:43:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:53.470 07:43:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:53.470 07:43:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.470 07:43:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:53.470 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.470 request: 00:24:53.470 { 00:24:53.470 "name": "NVMe0", 00:24:53.470 "trtype": "tcp", 00:24:53.470 "traddr": "10.0.0.2", 00:24:53.470 "hostaddr": "10.0.0.2", 00:24:53.470 "hostsvcid": "60000", 00:24:53.470 "adrfam": "ipv4", 00:24:53.470 "trsvcid": "4420", 00:24:53.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.470 "multipath": "failover", 00:24:53.470 "method": "bdev_nvme_attach_controller", 00:24:53.470 "req_id": 1 00:24:53.470 } 00:24:53.470 Got JSON-RPC error response 00:24:53.470 response: 00:24:53.470 { 00:24:53.470 "code": -114, 00:24:53.470 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:24:53.470 } 00:24:53.470 07:43:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@643 -- # es=1 00:24:53.470 07:43:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.470 07:43:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.470 07:43:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.470 07:43:57 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:53.470 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.470 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.730 00:24:53.730 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.730 07:43:57 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:53.730 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.730 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.730 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.730 07:43:57 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:53.730 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.730 07:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:53.730 00:24:53.730 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.730 07:43:57 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.730 07:43:57 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:53.730 07:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.730 07:43:57 -- common/autotest_common.sh@10 -- # set +x 
00:24:53.730 07:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.730 07:43:57 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:53.730 07:43:57 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.109 0 00:24:55.109 07:43:58 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:55.109 07:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.109 07:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:55.109 07:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.109 07:43:58 -- host/multicontroller.sh@100 -- # killprocess 29074 00:24:55.109 07:43:58 -- common/autotest_common.sh@926 -- # '[' -z 29074 ']' 00:24:55.109 07:43:58 -- common/autotest_common.sh@930 -- # kill -0 29074 00:24:55.109 07:43:58 -- common/autotest_common.sh@931 -- # uname 00:24:55.109 07:43:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:55.109 07:43:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 29074 00:24:55.109 07:43:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:55.109 07:43:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:55.109 07:43:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 29074' 00:24:55.109 killing process with pid 29074 00:24:55.109 07:43:58 -- common/autotest_common.sh@945 -- # kill 29074 00:24:55.109 07:43:58 -- common/autotest_common.sh@950 -- # wait 29074 00:24:55.109 07:43:58 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.109 07:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.109 07:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:55.109 07:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.109 07:43:58 -- host/multicontroller.sh@103 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:55.109 07:43:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.109 07:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:55.109 07:43:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.109 07:43:58 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:55.109 07:43:58 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.109 07:43:58 -- common/autotest_common.sh@1597 -- # read -r file 00:24:55.109 07:43:58 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:55.109 07:43:58 -- common/autotest_common.sh@1596 -- # sort -u 00:24:55.109 07:43:59 -- common/autotest_common.sh@1598 -- # cat 00:24:55.109 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:55.109 [2024-10-07 07:43:56.119525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:24:55.109 [2024-10-07 07:43:56.119576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29074 ] 00:24:55.109 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.109 [2024-10-07 07:43:56.175114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.109 [2024-10-07 07:43:56.245064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.109 [2024-10-07 07:43:57.567512] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 46b3c911-2e0b-479b-b462-c35af934ddf5 already exists 00:24:55.109 [2024-10-07 07:43:57.567540] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:46b3c911-2e0b-479b-b462-c35af934ddf5 alias for bdev NVMe1n1 00:24:55.109 [2024-10-07 07:43:57.567550] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:55.109 Running I/O for 1 seconds... 00:24:55.109 00:24:55.109 Latency(us) 00:24:55.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.109 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:55.109 NVMe0n1 : 1.00 26359.83 102.97 0.00 0.00 4845.64 3042.74 8488.47 00:24:55.109 =================================================================================================================== 00:24:55.109 Total : 26359.83 102.97 0.00 0.00 4845.64 3042.74 8488.47 00:24:55.109 Received shutdown signal, test time was about 1.000000 seconds 00:24:55.109 00:24:55.109 Latency(us) 00:24:55.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.109 =================================================================================================================== 00:24:55.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.109 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:55.109 07:43:59 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.109 07:43:59 -- common/autotest_common.sh@1597 -- # read -r file 00:24:55.109 07:43:59 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:55.109 07:43:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:55.109 07:43:59 -- nvmf/common.sh@116 -- # sync 00:24:55.109 07:43:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:55.109 07:43:59 -- nvmf/common.sh@119 -- # set +e 00:24:55.109 07:43:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:55.109 07:43:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:55.109 rmmod nvme_tcp 00:24:55.109 rmmod nvme_fabrics 00:24:55.109 rmmod nvme_keyring 00:24:55.109 07:43:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:55.369 07:43:59 -- nvmf/common.sh@123 -- # set -e 00:24:55.369 07:43:59 -- nvmf/common.sh@124 -- # return 0 00:24:55.369 07:43:59 -- nvmf/common.sh@477 -- # '[' -n 28832 ']' 00:24:55.369 07:43:59 -- nvmf/common.sh@478 -- # killprocess 28832 00:24:55.369 07:43:59 -- common/autotest_common.sh@926 -- # '[' -z 28832 ']' 00:24:55.369 07:43:59 -- common/autotest_common.sh@930 -- # kill -0 28832 00:24:55.369 07:43:59 -- common/autotest_common.sh@931 -- # uname 00:24:55.369 07:43:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:55.369 07:43:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 28832 00:24:55.369 07:43:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:55.369 07:43:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:55.369 07:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 28832' 00:24:55.369 killing process with pid 28832 00:24:55.369 07:43:59 -- common/autotest_common.sh@945 -- # kill 28832 00:24:55.369 07:43:59 -- common/autotest_common.sh@950 -- # wait 28832 00:24:55.629 07:43:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:55.629 07:43:59 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:55.629 07:43:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:55.629 07:43:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.629 07:43:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:55.629 07:43:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.629 07:43:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.629 07:43:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.537 07:44:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:57.537 00:24:57.537 real 0m12.141s 00:24:57.537 user 0m17.257s 00:24:57.537 sys 0m4.952s 00:24:57.537 07:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.537 07:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:57.537 ************************************ 00:24:57.537 END TEST nvmf_multicontroller 00:24:57.537 ************************************ 00:24:57.537 07:44:01 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:57.537 07:44:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:57.537 07:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.537 07:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:57.537 ************************************ 00:24:57.537 START TEST nvmf_aer 00:24:57.537 ************************************ 00:24:57.537 07:44:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:57.797 * Looking for test storage... 
00:24:57.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.797 07:44:01 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.797 07:44:01 -- nvmf/common.sh@7 -- # uname -s 00:24:57.797 07:44:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.797 07:44:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.797 07:44:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.797 07:44:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.797 07:44:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.797 07:44:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.797 07:44:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.797 07:44:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.797 07:44:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.797 07:44:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.797 07:44:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:57.797 07:44:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:57.797 07:44:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.797 07:44:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.797 07:44:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.797 07:44:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.797 07:44:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.797 07:44:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.797 07:44:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.797 07:44:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.797 07:44:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.797 07:44:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.797 07:44:01 -- paths/export.sh@5 -- # export PATH 00:24:57.797 07:44:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.797 07:44:01 -- nvmf/common.sh@46 -- # : 0 00:24:57.797 07:44:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:57.797 07:44:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:57.797 07:44:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:57.797 07:44:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.797 07:44:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.797 07:44:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:57.797 07:44:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:57.797 07:44:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:57.797 07:44:01 -- host/aer.sh@11 -- # nvmftestinit 00:24:57.797 07:44:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:57.797 07:44:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.797 07:44:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.797 07:44:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.797 07:44:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.797 07:44:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.797 07:44:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.797 07:44:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.797 07:44:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:57.797 07:44:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:57.797 07:44:01 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:24:57.797 07:44:01 -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 07:44:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.074 07:44:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:03.074 07:44:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:03.074 07:44:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:03.074 07:44:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:03.074 07:44:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:03.074 07:44:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:03.074 07:44:06 -- nvmf/common.sh@294 -- # net_devs=() 00:25:03.074 07:44:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:03.074 07:44:06 -- nvmf/common.sh@295 -- # e810=() 00:25:03.074 07:44:06 -- nvmf/common.sh@295 -- # local -ga e810 00:25:03.074 07:44:06 -- nvmf/common.sh@296 -- # x722=() 00:25:03.074 07:44:06 -- nvmf/common.sh@296 -- # local -ga x722 00:25:03.074 07:44:06 -- nvmf/common.sh@297 -- # mlx=() 00:25:03.074 07:44:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:03.074 07:44:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.074 07:44:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:03.074 07:44:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:03.074 07:44:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:03.074 07:44:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.074 07:44:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.074 07:44:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.074 07:44:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:03.074 07:44:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:03.074 07:44:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:03.074 07:44:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:25:03.074 07:44:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.074 07:44:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.074 07:44:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.074 07:44:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.074 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.074 07:44:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.074 07:44:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.074 07:44:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.074 07:44:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.075 07:44:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.075 07:44:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.075 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.075 07:44:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.075 07:44:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:03.075 07:44:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:03.075 07:44:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:03.075 07:44:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:03.075 07:44:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:03.075 07:44:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.075 07:44:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.075 07:44:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.075 07:44:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:03.075 07:44:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.075 07:44:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.075 07:44:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:03.075 07:44:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:25:03.075 07:44:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.075 07:44:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:03.075 07:44:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:03.075 07:44:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.075 07:44:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.075 07:44:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.075 07:44:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.075 07:44:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:03.075 07:44:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.075 07:44:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.075 07:44:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.075 07:44:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:03.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:25:03.075 00:25:03.075 --- 10.0.0.2 ping statistics --- 00:25:03.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.075 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:25:03.075 07:44:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:25:03.075 00:25:03.075 --- 10.0.0.1 ping statistics --- 00:25:03.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.075 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:03.075 07:44:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.075 07:44:07 -- nvmf/common.sh@410 -- # return 0 00:25:03.075 07:44:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.075 07:44:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.075 07:44:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.075 07:44:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.075 07:44:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.075 07:44:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.075 07:44:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.334 07:44:07 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:03.334 07:44:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:03.334 07:44:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.334 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:03.334 07:44:07 -- nvmf/common.sh@469 -- # nvmfpid=32968 00:25:03.334 07:44:07 -- nvmf/common.sh@470 -- # waitforlisten 32968 00:25:03.334 07:44:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.334 07:44:07 -- common/autotest_common.sh@819 -- # '[' -z 32968 ']' 00:25:03.334 07:44:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.334 07:44:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.334 07:44:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:03.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.334 07:44:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.334 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:03.334 [2024-10-07 07:44:07.098887] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:03.334 [2024-10-07 07:44:07.098929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.334 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.334 [2024-10-07 07:44:07.157359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.334 [2024-10-07 07:44:07.234165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:03.334 [2024-10-07 07:44:07.234274] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.334 [2024-10-07 07:44:07.234283] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.334 [2024-10-07 07:44:07.234289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.334 [2024-10-07 07:44:07.234332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.334 [2024-10-07 07:44:07.234427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.334 [2024-10-07 07:44:07.234513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.334 [2024-10-07 07:44:07.234514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.273 07:44:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.273 07:44:07 -- common/autotest_common.sh@852 -- # return 0 00:25:04.273 07:44:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:04.273 07:44:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:04.273 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.273 07:44:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.273 07:44:07 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.274 07:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 [2024-10-07 07:44:07.955371] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.274 07:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.274 07:44:07 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:04.274 07:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 Malloc0 00:25:04.274 07:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.274 07:44:07 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:04.274 07:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 07:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:04.274 07:44:07 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.274 07:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.274 07:44:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.274 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 [2024-10-07 07:44:08.010924] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.274 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.274 07:44:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:04.274 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.274 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 [2024-10-07 07:44:08.018715] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:04.274 [ 00:25:04.274 { 00:25:04.274 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:04.274 "subtype": "Discovery", 00:25:04.274 "listen_addresses": [], 00:25:04.274 "allow_any_host": true, 00:25:04.274 "hosts": [] 00:25:04.274 }, 00:25:04.274 { 00:25:04.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.274 "subtype": "NVMe", 00:25:04.274 "listen_addresses": [ 00:25:04.274 { 00:25:04.274 "transport": "TCP", 00:25:04.274 "trtype": "TCP", 00:25:04.274 "adrfam": "IPv4", 00:25:04.274 "traddr": "10.0.0.2", 00:25:04.274 "trsvcid": "4420" 00:25:04.274 } 00:25:04.274 ], 00:25:04.274 "allow_any_host": true, 00:25:04.274 "hosts": [], 00:25:04.274 "serial_number": "SPDK00000000000001", 00:25:04.274 "model_number": "SPDK bdev Controller", 
00:25:04.274 "max_namespaces": 2, 00:25:04.274 "min_cntlid": 1, 00:25:04.274 "max_cntlid": 65519, 00:25:04.274 "namespaces": [ 00:25:04.274 { 00:25:04.274 "nsid": 1, 00:25:04.274 "bdev_name": "Malloc0", 00:25:04.274 "name": "Malloc0", 00:25:04.274 "nguid": "4D5F359548524D3E81229C71824F099A", 00:25:04.274 "uuid": "4d5f3595-4852-4d3e-8122-9c71824f099a" 00:25:04.274 } 00:25:04.274 ] 00:25:04.274 } 00:25:04.274 ] 00:25:04.274 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.274 07:44:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:04.274 07:44:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:04.274 07:44:08 -- host/aer.sh@33 -- # aerpid=33061 00:25:04.274 07:44:08 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:04.274 07:44:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:04.274 07:44:08 -- common/autotest_common.sh@1244 -- # local i=0 00:25:04.274 07:44:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.274 07:44:08 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:04.274 07:44:08 -- common/autotest_common.sh@1247 -- # i=1 00:25:04.274 07:44:08 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:04.274 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.274 07:44:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.274 07:44:08 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:04.274 07:44:08 -- common/autotest_common.sh@1247 -- # i=2 00:25:04.274 07:44:08 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:04.533 07:44:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.533 07:44:08 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:04.533 07:44:08 -- common/autotest_common.sh@1255 -- # return 0 00:25:04.533 07:44:08 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:04.533 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.533 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.533 Malloc1 00:25:04.533 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.533 07:44:08 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:04.533 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.533 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.533 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.533 07:44:08 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:04.533 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.533 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.533 Asynchronous Event Request test 00:25:04.533 Attaching to 10.0.0.2 00:25:04.533 Attached to 10.0.0.2 00:25:04.533 Registering asynchronous event callbacks... 00:25:04.533 Starting namespace attribute notice tests for all controllers... 00:25:04.533 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:04.533 aer_cb - Changed Namespace 00:25:04.533 Cleaning up... 
00:25:04.533 [ 00:25:04.533 { 00:25:04.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:04.533 "subtype": "Discovery", 00:25:04.533 "listen_addresses": [], 00:25:04.533 "allow_any_host": true, 00:25:04.533 "hosts": [] 00:25:04.533 }, 00:25:04.533 { 00:25:04.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.533 "subtype": "NVMe", 00:25:04.533 "listen_addresses": [ 00:25:04.533 { 00:25:04.533 "transport": "TCP", 00:25:04.533 "trtype": "TCP", 00:25:04.533 "adrfam": "IPv4", 00:25:04.533 "traddr": "10.0.0.2", 00:25:04.533 "trsvcid": "4420" 00:25:04.533 } 00:25:04.533 ], 00:25:04.533 "allow_any_host": true, 00:25:04.533 "hosts": [], 00:25:04.533 "serial_number": "SPDK00000000000001", 00:25:04.533 "model_number": "SPDK bdev Controller", 00:25:04.533 "max_namespaces": 2, 00:25:04.533 "min_cntlid": 1, 00:25:04.533 "max_cntlid": 65519, 00:25:04.533 "namespaces": [ 00:25:04.533 { 00:25:04.533 "nsid": 1, 00:25:04.533 "bdev_name": "Malloc0", 00:25:04.533 "name": "Malloc0", 00:25:04.533 "nguid": "4D5F359548524D3E81229C71824F099A", 00:25:04.533 "uuid": "4d5f3595-4852-4d3e-8122-9c71824f099a" 00:25:04.533 }, 00:25:04.534 { 00:25:04.534 "nsid": 2, 00:25:04.534 "bdev_name": "Malloc1", 00:25:04.534 "name": "Malloc1", 00:25:04.534 "nguid": "723F74E04BA34496A54F111C8EE60DBE", 00:25:04.534 "uuid": "723f74e0-4ba3-4496-a54f-111c8ee60dbe" 00:25:04.534 } 00:25:04.534 ] 00:25:04.534 } 00:25:04.534 ] 00:25:04.534 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.534 07:44:08 -- host/aer.sh@43 -- # wait 33061 00:25:04.534 07:44:08 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:04.534 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.534 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.534 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.534 07:44:08 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:04.534 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.534 
07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.534 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.534 07:44:08 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.534 07:44:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.534 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.534 07:44:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.534 07:44:08 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:04.534 07:44:08 -- host/aer.sh@51 -- # nvmftestfini 00:25:04.534 07:44:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:04.534 07:44:08 -- nvmf/common.sh@116 -- # sync 00:25:04.534 07:44:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:04.534 07:44:08 -- nvmf/common.sh@119 -- # set +e 00:25:04.534 07:44:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:04.534 07:44:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:04.534 rmmod nvme_tcp 00:25:04.534 rmmod nvme_fabrics 00:25:04.534 rmmod nvme_keyring 00:25:04.534 07:44:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:04.534 07:44:08 -- nvmf/common.sh@123 -- # set -e 00:25:04.534 07:44:08 -- nvmf/common.sh@124 -- # return 0 00:25:04.534 07:44:08 -- nvmf/common.sh@477 -- # '[' -n 32968 ']' 00:25:04.534 07:44:08 -- nvmf/common.sh@478 -- # killprocess 32968 00:25:04.534 07:44:08 -- common/autotest_common.sh@926 -- # '[' -z 32968 ']' 00:25:04.534 07:44:08 -- common/autotest_common.sh@930 -- # kill -0 32968 00:25:04.534 07:44:08 -- common/autotest_common.sh@931 -- # uname 00:25:04.534 07:44:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:04.534 07:44:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 32968 00:25:04.534 07:44:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:04.534 07:44:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:04.534 07:44:08 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 32968' 00:25:04.534 killing process with pid 32968 00:25:04.534 07:44:08 -- common/autotest_common.sh@945 -- # kill 32968 00:25:04.534 [2024-10-07 07:44:08.479965] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:04.534 07:44:08 -- common/autotest_common.sh@950 -- # wait 32968 00:25:04.793 07:44:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:04.793 07:44:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:04.793 07:44:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:04.793 07:44:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.793 07:44:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:04.793 07:44:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.793 07:44:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.793 07:44:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.448 07:44:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:07.448 00:25:07.448 real 0m9.271s 00:25:07.448 user 0m7.385s 00:25:07.448 sys 0m4.462s 00:25:07.448 07:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.448 07:44:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.448 ************************************ 00:25:07.448 END TEST nvmf_aer 00:25:07.448 ************************************ 00:25:07.448 07:44:10 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:07.448 07:44:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:07.448 07:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.448 07:44:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.448 ************************************ 00:25:07.448 START TEST nvmf_async_init 00:25:07.448 
************************************ 00:25:07.448 07:44:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:07.448 * Looking for test storage... 00:25:07.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.448 07:44:10 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.448 07:44:10 -- nvmf/common.sh@7 -- # uname -s 00:25:07.448 07:44:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.448 07:44:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.448 07:44:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.448 07:44:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.448 07:44:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.448 07:44:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.448 07:44:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.448 07:44:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.448 07:44:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.448 07:44:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.448 07:44:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:07.448 07:44:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:07.448 07:44:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.448 07:44:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.448 07:44:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.448 07:44:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.448 07:44:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.448 07:44:10 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.448 07:44:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.448 07:44:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.448 07:44:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.448 07:44:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.448 07:44:10 -- paths/export.sh@5 -- # export PATH 00:25:07.448 07:44:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.448 07:44:10 -- nvmf/common.sh@46 -- # : 0 00:25:07.448 07:44:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:07.448 07:44:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:07.448 07:44:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:07.448 07:44:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.448 07:44:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.448 07:44:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:07.448 07:44:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:07.448 07:44:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:07.448 07:44:10 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:07.448 07:44:10 -- host/async_init.sh@14 -- # null_block_size=512 00:25:07.448 07:44:10 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:07.448 07:44:10 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:07.448 07:44:10 -- host/async_init.sh@20 -- # uuidgen 00:25:07.448 07:44:10 -- host/async_init.sh@20 -- # tr -d - 00:25:07.448 07:44:10 -- host/async_init.sh@20 -- # nguid=58e7fcfe3d514b47a67993b213568ed8 00:25:07.448 07:44:10 -- host/async_init.sh@22 -- # nvmftestinit 00:25:07.448 07:44:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:07.448 07:44:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.448 07:44:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:07.448 07:44:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:25:07.448 07:44:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:07.448 07:44:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.448 07:44:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.448 07:44:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.448 07:44:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:07.448 07:44:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:07.448 07:44:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:07.448 07:44:10 -- common/autotest_common.sh@10 -- # set +x 00:25:12.838 07:44:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:12.838 07:44:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:12.838 07:44:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:12.838 07:44:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:12.838 07:44:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:12.838 07:44:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:12.838 07:44:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:12.838 07:44:15 -- nvmf/common.sh@294 -- # net_devs=() 00:25:12.838 07:44:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:12.838 07:44:15 -- nvmf/common.sh@295 -- # e810=() 00:25:12.838 07:44:15 -- nvmf/common.sh@295 -- # local -ga e810 00:25:12.838 07:44:15 -- nvmf/common.sh@296 -- # x722=() 00:25:12.838 07:44:15 -- nvmf/common.sh@296 -- # local -ga x722 00:25:12.838 07:44:15 -- nvmf/common.sh@297 -- # mlx=() 00:25:12.838 07:44:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:12.838 07:44:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.838 07:44:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:12.838 07:44:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:12.838 07:44:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:12.838 07:44:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.838 07:44:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:12.838 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:12.838 07:44:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.838 07:44:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:12.838 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:12.838 07:44:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.838 07:44:15 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:12.838 07:44:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.838 07:44:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.838 07:44:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.838 07:44:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.838 07:44:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:12.838 Found net devices under 0000:af:00.0: cvl_0_0 00:25:12.838 07:44:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.838 07:44:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.838 07:44:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.838 07:44:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.838 07:44:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.838 07:44:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:12.838 Found net devices under 0000:af:00.1: cvl_0_1 00:25:12.838 07:44:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.838 07:44:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:12.838 07:44:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:12.838 07:44:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:12.838 07:44:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:12.839 07:44:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:12.839 07:44:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.839 07:44:15 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.839 07:44:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.839 07:44:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:12.839 07:44:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.839 07:44:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.839 07:44:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:12.839 07:44:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.839 07:44:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.839 07:44:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:12.839 07:44:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:12.839 07:44:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.839 07:44:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.839 07:44:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.839 07:44:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.839 07:44:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:12.839 07:44:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.839 07:44:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.839 07:44:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.839 07:44:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:12.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:12.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:25:12.839 00:25:12.839 --- 10.0.0.2 ping statistics --- 00:25:12.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.839 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:25:12.839 07:44:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:25:12.839 00:25:12.839 --- 10.0.0.1 ping statistics --- 00:25:12.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.839 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:25:12.839 07:44:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.839 07:44:16 -- nvmf/common.sh@410 -- # return 0 00:25:12.839 07:44:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:12.839 07:44:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.839 07:44:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:12.839 07:44:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:12.839 07:44:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.839 07:44:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:12.839 07:44:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:12.839 07:44:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:12.839 07:44:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:12.839 07:44:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:12.839 07:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 07:44:16 -- nvmf/common.sh@469 -- # nvmfpid=36549 00:25:12.839 07:44:16 -- nvmf/common.sh@470 -- # waitforlisten 36549 00:25:12.839 07:44:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:12.839 07:44:16 -- common/autotest_common.sh@819 -- 
# '[' -z 36549 ']' 00:25:12.839 07:44:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.839 07:44:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:12.839 07:44:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.839 07:44:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:12.839 07:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 [2024-10-07 07:44:16.283038] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:12.839 [2024-10-07 07:44:16.283100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.839 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.839 [2024-10-07 07:44:16.342378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.839 [2024-10-07 07:44:16.419876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:12.839 [2024-10-07 07:44:16.419980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.839 [2024-10-07 07:44:16.419988] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.839 [2024-10-07 07:44:16.419994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:12.839 [2024-10-07 07:44:16.420016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.408 07:44:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.408 07:44:17 -- common/autotest_common.sh@852 -- # return 0 00:25:13.408 07:44:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:13.408 07:44:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 07:44:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.408 07:44:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 [2024-10-07 07:44:17.126643] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 null0 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 58e7fcfe3d514b47a67993b213568ed8 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.408 [2024-10-07 07:44:17.170876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.408 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.408 07:44:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:13.408 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.408 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 nvme0n1 00:25:13.668 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.668 07:44:17 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:13.668 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.668 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 [ 00:25:13.668 { 00:25:13.668 "name": "nvme0n1", 00:25:13.668 "aliases": [ 00:25:13.668 "58e7fcfe-3d51-4b47-a679-93b213568ed8" 00:25:13.668 ], 00:25:13.668 "product_name": "NVMe disk", 00:25:13.668 "block_size": 512, 00:25:13.668 "num_blocks": 2097152, 00:25:13.668 "uuid": "58e7fcfe-3d51-4b47-a679-93b213568ed8", 00:25:13.668 "assigned_rate_limits": { 00:25:13.668 "rw_ios_per_sec": 0, 00:25:13.668 "rw_mbytes_per_sec": 0, 00:25:13.668 "r_mbytes_per_sec": 0, 00:25:13.668 "w_mbytes_per_sec": 0 00:25:13.668 }, 00:25:13.668 
"claimed": false, 00:25:13.668 "zoned": false, 00:25:13.668 "supported_io_types": { 00:25:13.668 "read": true, 00:25:13.668 "write": true, 00:25:13.668 "unmap": false, 00:25:13.668 "write_zeroes": true, 00:25:13.668 "flush": true, 00:25:13.668 "reset": true, 00:25:13.668 "compare": true, 00:25:13.668 "compare_and_write": true, 00:25:13.668 "abort": true, 00:25:13.668 "nvme_admin": true, 00:25:13.668 "nvme_io": true 00:25:13.668 }, 00:25:13.668 "driver_specific": { 00:25:13.668 "nvme": [ 00:25:13.668 { 00:25:13.668 "trid": { 00:25:13.668 "trtype": "TCP", 00:25:13.668 "adrfam": "IPv4", 00:25:13.668 "traddr": "10.0.0.2", 00:25:13.668 "trsvcid": "4420", 00:25:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:13.668 }, 00:25:13.668 "ctrlr_data": { 00:25:13.668 "cntlid": 1, 00:25:13.668 "vendor_id": "0x8086", 00:25:13.668 "model_number": "SPDK bdev Controller", 00:25:13.668 "serial_number": "00000000000000000000", 00:25:13.668 "firmware_revision": "24.01.1", 00:25:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.668 "oacs": { 00:25:13.668 "security": 0, 00:25:13.668 "format": 0, 00:25:13.668 "firmware": 0, 00:25:13.668 "ns_manage": 0 00:25:13.668 }, 00:25:13.668 "multi_ctrlr": true, 00:25:13.668 "ana_reporting": false 00:25:13.668 }, 00:25:13.668 "vs": { 00:25:13.668 "nvme_version": "1.3" 00:25:13.668 }, 00:25:13.668 "ns_data": { 00:25:13.668 "id": 1, 00:25:13.668 "can_share": true 00:25:13.668 } 00:25:13.668 } 00:25:13.668 ], 00:25:13.668 "mp_policy": "active_passive" 00:25:13.668 } 00:25:13.668 } 00:25:13.668 ] 00:25:13.668 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.668 07:44:17 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:13.668 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.668 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 [2024-10-07 07:44:17.435443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:25:13.668 [2024-10-07 07:44:17.435494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17547a0 (9): Bad file descriptor 00:25:13.668 [2024-10-07 07:44:17.567128] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:13.668 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.668 07:44:17 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:13.668 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.668 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 [ 00:25:13.668 { 00:25:13.668 "name": "nvme0n1", 00:25:13.668 "aliases": [ 00:25:13.668 "58e7fcfe-3d51-4b47-a679-93b213568ed8" 00:25:13.668 ], 00:25:13.668 "product_name": "NVMe disk", 00:25:13.668 "block_size": 512, 00:25:13.668 "num_blocks": 2097152, 00:25:13.668 "uuid": "58e7fcfe-3d51-4b47-a679-93b213568ed8", 00:25:13.668 "assigned_rate_limits": { 00:25:13.668 "rw_ios_per_sec": 0, 00:25:13.668 "rw_mbytes_per_sec": 0, 00:25:13.668 "r_mbytes_per_sec": 0, 00:25:13.668 "w_mbytes_per_sec": 0 00:25:13.668 }, 00:25:13.668 "claimed": false, 00:25:13.668 "zoned": false, 00:25:13.668 "supported_io_types": { 00:25:13.668 "read": true, 00:25:13.668 "write": true, 00:25:13.668 "unmap": false, 00:25:13.668 "write_zeroes": true, 00:25:13.668 "flush": true, 00:25:13.668 "reset": true, 00:25:13.668 "compare": true, 00:25:13.668 "compare_and_write": true, 00:25:13.668 "abort": true, 00:25:13.668 "nvme_admin": true, 00:25:13.668 "nvme_io": true 00:25:13.668 }, 00:25:13.668 "driver_specific": { 00:25:13.668 "nvme": [ 00:25:13.668 { 00:25:13.668 "trid": { 00:25:13.668 "trtype": "TCP", 00:25:13.668 "adrfam": "IPv4", 00:25:13.668 "traddr": "10.0.0.2", 00:25:13.668 "trsvcid": "4420", 00:25:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:13.668 }, 00:25:13.668 "ctrlr_data": { 00:25:13.668 "cntlid": 2, 00:25:13.668 "vendor_id": "0x8086", 00:25:13.668 "model_number": "SPDK bdev 
Controller", 00:25:13.668 "serial_number": "00000000000000000000", 00:25:13.668 "firmware_revision": "24.01.1", 00:25:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.668 "oacs": { 00:25:13.668 "security": 0, 00:25:13.668 "format": 0, 00:25:13.668 "firmware": 0, 00:25:13.668 "ns_manage": 0 00:25:13.668 }, 00:25:13.668 "multi_ctrlr": true, 00:25:13.668 "ana_reporting": false 00:25:13.668 }, 00:25:13.668 "vs": { 00:25:13.668 "nvme_version": "1.3" 00:25:13.669 }, 00:25:13.669 "ns_data": { 00:25:13.669 "id": 1, 00:25:13.669 "can_share": true 00:25:13.669 } 00:25:13.669 } 00:25:13.669 ], 00:25:13.669 "mp_policy": "active_passive" 00:25:13.669 } 00:25:13.669 } 00:25:13.669 ] 00:25:13.669 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.669 07:44:17 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.669 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.669 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.669 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.669 07:44:17 -- host/async_init.sh@53 -- # mktemp 00:25:13.669 07:44:17 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TUuje6oWEF 00:25:13.669 07:44:17 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:13.669 07:44:17 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TUuje6oWEF 00:25:13.669 07:44:17 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:13.669 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.669 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.669 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.669 07:44:17 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:13.669 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:13.669 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.669 [2024-10-07 07:44:17.632030] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.669 [2024-10-07 07:44:17.632132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.669 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.669 07:44:17 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TUuje6oWEF 00:25:13.669 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.669 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.929 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.929 07:44:17 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TUuje6oWEF 00:25:13.929 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.929 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.929 [2024-10-07 07:44:17.652083] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.929 nvme0n1 00:25:13.929 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.929 07:44:17 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:13.929 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.929 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.929 [ 00:25:13.929 { 00:25:13.929 "name": "nvme0n1", 00:25:13.929 "aliases": [ 00:25:13.929 "58e7fcfe-3d51-4b47-a679-93b213568ed8" 00:25:13.929 ], 00:25:13.929 "product_name": "NVMe disk", 00:25:13.929 "block_size": 512, 00:25:13.929 "num_blocks": 2097152, 00:25:13.929 "uuid": "58e7fcfe-3d51-4b47-a679-93b213568ed8", 00:25:13.929 "assigned_rate_limits": { 00:25:13.929 "rw_ios_per_sec": 0, 
00:25:13.929 "rw_mbytes_per_sec": 0, 00:25:13.929 "r_mbytes_per_sec": 0, 00:25:13.929 "w_mbytes_per_sec": 0 00:25:13.929 }, 00:25:13.929 "claimed": false, 00:25:13.929 "zoned": false, 00:25:13.929 "supported_io_types": { 00:25:13.929 "read": true, 00:25:13.929 "write": true, 00:25:13.929 "unmap": false, 00:25:13.929 "write_zeroes": true, 00:25:13.929 "flush": true, 00:25:13.929 "reset": true, 00:25:13.929 "compare": true, 00:25:13.929 "compare_and_write": true, 00:25:13.929 "abort": true, 00:25:13.929 "nvme_admin": true, 00:25:13.929 "nvme_io": true 00:25:13.929 }, 00:25:13.929 "driver_specific": { 00:25:13.929 "nvme": [ 00:25:13.929 { 00:25:13.929 "trid": { 00:25:13.929 "trtype": "TCP", 00:25:13.929 "adrfam": "IPv4", 00:25:13.929 "traddr": "10.0.0.2", 00:25:13.929 "trsvcid": "4421", 00:25:13.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:13.929 }, 00:25:13.929 "ctrlr_data": { 00:25:13.929 "cntlid": 3, 00:25:13.929 "vendor_id": "0x8086", 00:25:13.929 "model_number": "SPDK bdev Controller", 00:25:13.929 "serial_number": "00000000000000000000", 00:25:13.929 "firmware_revision": "24.01.1", 00:25:13.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.929 "oacs": { 00:25:13.929 "security": 0, 00:25:13.929 "format": 0, 00:25:13.929 "firmware": 0, 00:25:13.929 "ns_manage": 0 00:25:13.929 }, 00:25:13.929 "multi_ctrlr": true, 00:25:13.929 "ana_reporting": false 00:25:13.929 }, 00:25:13.929 "vs": { 00:25:13.929 "nvme_version": "1.3" 00:25:13.929 }, 00:25:13.929 "ns_data": { 00:25:13.929 "id": 1, 00:25:13.929 "can_share": true 00:25:13.929 } 00:25:13.929 } 00:25:13.929 ], 00:25:13.929 "mp_policy": "active_passive" 00:25:13.929 } 00:25:13.929 } 00:25:13.929 ] 00:25:13.929 07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.929 07:44:17 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.929 07:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.929 07:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:13.929 
07:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.929 07:44:17 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TUuje6oWEF 00:25:13.929 07:44:17 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:13.929 07:44:17 -- host/async_init.sh@78 -- # nvmftestfini 00:25:13.929 07:44:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:13.929 07:44:17 -- nvmf/common.sh@116 -- # sync 00:25:13.929 07:44:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:13.929 07:44:17 -- nvmf/common.sh@119 -- # set +e 00:25:13.929 07:44:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:13.929 07:44:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:13.929 rmmod nvme_tcp 00:25:13.929 rmmod nvme_fabrics 00:25:13.929 rmmod nvme_keyring 00:25:13.929 07:44:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:13.929 07:44:17 -- nvmf/common.sh@123 -- # set -e 00:25:13.929 07:44:17 -- nvmf/common.sh@124 -- # return 0 00:25:13.929 07:44:17 -- nvmf/common.sh@477 -- # '[' -n 36549 ']' 00:25:13.929 07:44:17 -- nvmf/common.sh@478 -- # killprocess 36549 00:25:13.929 07:44:17 -- common/autotest_common.sh@926 -- # '[' -z 36549 ']' 00:25:13.929 07:44:17 -- common/autotest_common.sh@930 -- # kill -0 36549 00:25:13.929 07:44:17 -- common/autotest_common.sh@931 -- # uname 00:25:13.929 07:44:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:13.929 07:44:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 36549 00:25:13.929 07:44:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:13.929 07:44:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:13.929 07:44:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 36549' 00:25:13.929 killing process with pid 36549 00:25:13.929 07:44:17 -- common/autotest_common.sh@945 -- # kill 36549 00:25:13.929 07:44:17 -- common/autotest_common.sh@950 -- # wait 36549 00:25:14.188 07:44:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:14.188 
07:44:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:14.188 07:44:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:14.188 07:44:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.188 07:44:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:14.188 07:44:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.188 07:44:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.189 07:44:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.726 07:44:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:16.726 00:25:16.726 real 0m9.343s 00:25:16.726 user 0m3.618s 00:25:16.726 sys 0m4.276s 00:25:16.726 07:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.726 07:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:16.726 ************************************ 00:25:16.726 END TEST nvmf_async_init 00:25:16.726 ************************************ 00:25:16.726 07:44:20 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:16.726 07:44:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:16.726 07:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:16.726 07:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:16.726 ************************************ 00:25:16.726 START TEST dma 00:25:16.726 ************************************ 00:25:16.726 07:44:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:16.726 * Looking for test storage... 
00:25:16.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.726 07:44:20 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.726 07:44:20 -- nvmf/common.sh@7 -- # uname -s 00:25:16.726 07:44:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.726 07:44:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.726 07:44:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.726 07:44:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.726 07:44:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.726 07:44:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.726 07:44:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.726 07:44:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.726 07:44:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.726 07:44:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.726 07:44:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:16.726 07:44:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:16.726 07:44:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.726 07:44:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.726 07:44:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.726 07:44:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.726 07:44:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.726 07:44:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.726 07:44:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.726 07:44:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.726 07:44:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.726 07:44:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.726 07:44:20 -- paths/export.sh@5 -- # export PATH 00:25:16.727 07:44:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.727 07:44:20 -- nvmf/common.sh@46 -- # : 0 00:25:16.727 07:44:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:16.727 07:44:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:16.727 07:44:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.727 07:44:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.727 07:44:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:16.727 07:44:20 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:16.727 07:44:20 -- host/dma.sh@13 -- # exit 0 00:25:16.727 00:25:16.727 real 0m0.116s 00:25:16.727 user 0m0.057s 00:25:16.727 sys 0m0.067s 00:25:16.727 07:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.727 07:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:16.727 ************************************ 00:25:16.727 END TEST dma 00:25:16.727 ************************************ 00:25:16.727 07:44:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:16.727 07:44:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:16.727 07:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:16.727 07:44:20 -- common/autotest_common.sh@10 
-- # set +x 00:25:16.727 ************************************ 00:25:16.727 START TEST nvmf_identify 00:25:16.727 ************************************ 00:25:16.727 07:44:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:16.727 * Looking for test storage... 00:25:16.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.727 07:44:20 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.727 07:44:20 -- nvmf/common.sh@7 -- # uname -s 00:25:16.727 07:44:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.727 07:44:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.727 07:44:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.727 07:44:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.727 07:44:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.727 07:44:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.727 07:44:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.727 07:44:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.727 07:44:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.727 07:44:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.727 07:44:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:16.727 07:44:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:16.727 07:44:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.727 07:44:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.727 07:44:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.727 07:44:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.727 07:44:20 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:16.727 07:44:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.727 07:44:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.727 07:44:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.727 07:44:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.727 07:44:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.727 07:44:20 -- paths/export.sh@5 -- # export PATH 00:25:16.727 
07:44:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.727 07:44:20 -- nvmf/common.sh@46 -- # : 0 00:25:16.727 07:44:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:16.727 07:44:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:16.727 07:44:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.727 07:44:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.727 07:44:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:16.727 07:44:20 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:16.727 07:44:20 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:16.727 07:44:20 -- host/identify.sh@14 -- # nvmftestinit 00:25:16.727 07:44:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:16.727 07:44:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.727 07:44:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:16.727 07:44:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:16.727 07:44:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:16.727 07:44:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.727 07:44:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.727 07:44:20 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:16.727 07:44:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:16.727 07:44:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:16.727 07:44:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:16.727 07:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:22.005 07:44:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:22.005 07:44:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:22.005 07:44:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:22.005 07:44:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:22.005 07:44:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:22.005 07:44:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:22.005 07:44:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:22.005 07:44:25 -- nvmf/common.sh@294 -- # net_devs=() 00:25:22.005 07:44:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:22.005 07:44:25 -- nvmf/common.sh@295 -- # e810=() 00:25:22.005 07:44:25 -- nvmf/common.sh@295 -- # local -ga e810 00:25:22.005 07:44:25 -- nvmf/common.sh@296 -- # x722=() 00:25:22.005 07:44:25 -- nvmf/common.sh@296 -- # local -ga x722 00:25:22.005 07:44:25 -- nvmf/common.sh@297 -- # mlx=() 00:25:22.005 07:44:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:22.005 07:44:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.005 07:44:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:22.005 07:44:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:22.005 07:44:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:22.005 07:44:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.005 07:44:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:22.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:22.005 07:44:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:22.005 07:44:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:22.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:22.005 07:44:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:22.005 07:44:25 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:22.005 07:44:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.005 07:44:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.005 07:44:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.005 07:44:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.005 07:44:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:22.005 Found net devices under 0000:af:00.0: cvl_0_0 00:25:22.005 07:44:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.005 07:44:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:22.005 07:44:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.005 07:44:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:22.006 07:44:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.006 07:44:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:22.006 Found net devices under 0000:af:00.1: cvl_0_1 00:25:22.006 07:44:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.006 07:44:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:22.006 07:44:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:22.006 07:44:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:22.006 07:44:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:22.006 07:44:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:22.006 07:44:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.006 07:44:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.006 07:44:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.006 07:44:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:22.006 07:44:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.006 07:44:25 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.006 07:44:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:22.006 07:44:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.006 07:44:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.006 07:44:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:22.006 07:44:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:22.006 07:44:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.006 07:44:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.006 07:44:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.006 07:44:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.006 07:44:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:22.006 07:44:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.006 07:44:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.006 07:44:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.265 07:44:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:22.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:25:22.266 00:25:22.266 --- 10.0.0.2 ping statistics --- 00:25:22.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.266 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:22.266 07:44:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:25:22.266 00:25:22.266 --- 10.0.0.1 ping statistics --- 00:25:22.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.266 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:22.266 07:44:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.266 07:44:25 -- nvmf/common.sh@410 -- # return 0 00:25:22.266 07:44:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:22.266 07:44:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.266 07:44:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:22.266 07:44:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:22.266 07:44:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.266 07:44:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:22.266 07:44:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:22.266 07:44:26 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:22.266 07:44:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:22.266 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:22.266 07:44:26 -- host/identify.sh@19 -- # nvmfpid=40312 00:25:22.266 07:44:26 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.266 07:44:26 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.266 07:44:26 -- host/identify.sh@23 -- # waitforlisten 40312 00:25:22.266 07:44:26 -- common/autotest_common.sh@819 -- # '[' -z 40312 ']' 00:25:22.266 07:44:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.266 07:44:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:22.266 07:44:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:22.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.266 07:44:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:22.266 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:22.266 [2024-10-07 07:44:26.090496] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:22.266 [2024-10-07 07:44:26.090545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.266 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.266 [2024-10-07 07:44:26.151366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.266 [2024-10-07 07:44:26.224555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.266 [2024-10-07 07:44:26.224665] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.266 [2024-10-07 07:44:26.224674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.266 [2024-10-07 07:44:26.224680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:22.266 [2024-10-07 07:44:26.224727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.266 [2024-10-07 07:44:26.224826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.266 [2024-10-07 07:44:26.224912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.266 [2024-10-07 07:44:26.224913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.207 07:44:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:23.207 07:44:26 -- common/autotest_common.sh@852 -- # return 0 00:25:23.207 07:44:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.207 07:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 [2024-10-07 07:44:26.917274] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.207 07:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:23.207 07:44:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 07:44:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:23.207 07:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 Malloc0 00:25:23.207 07:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.207 07:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 07:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:23.207 07:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 07:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.207 07:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:26 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 [2024-10-07 07:44:27.000831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.207 07:44:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:27 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:23.207 07:44:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 07:44:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:27 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:23.207 07:44:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.207 07:44:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.207 [2024-10-07 07:44:27.016625] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:23.207 [ 00:25:23.207 { 00:25:23.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:23.207 "subtype": "Discovery", 00:25:23.207 "listen_addresses": [ 00:25:23.207 { 00:25:23.207 "transport": "TCP", 00:25:23.207 "trtype": "TCP", 00:25:23.207 "adrfam": "IPv4", 00:25:23.207 "traddr": "10.0.0.2", 00:25:23.207 "trsvcid": "4420" 00:25:23.207 } 00:25:23.207 ], 00:25:23.207 "allow_any_host": true, 00:25:23.207 "hosts": [] 00:25:23.207 }, 00:25:23.207 
{ 00:25:23.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.207 "subtype": "NVMe", 00:25:23.207 "listen_addresses": [ 00:25:23.207 { 00:25:23.207 "transport": "TCP", 00:25:23.207 "trtype": "TCP", 00:25:23.207 "adrfam": "IPv4", 00:25:23.207 "traddr": "10.0.0.2", 00:25:23.207 "trsvcid": "4420" 00:25:23.207 } 00:25:23.207 ], 00:25:23.207 "allow_any_host": true, 00:25:23.207 "hosts": [], 00:25:23.207 "serial_number": "SPDK00000000000001", 00:25:23.207 "model_number": "SPDK bdev Controller", 00:25:23.207 "max_namespaces": 32, 00:25:23.207 "min_cntlid": 1, 00:25:23.207 "max_cntlid": 65519, 00:25:23.207 "namespaces": [ 00:25:23.207 { 00:25:23.207 "nsid": 1, 00:25:23.207 "bdev_name": "Malloc0", 00:25:23.207 "name": "Malloc0", 00:25:23.207 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:23.207 "eui64": "ABCDEF0123456789", 00:25:23.207 "uuid": "b9844a1c-dd3e-4208-951b-0a63067f9182" 00:25:23.207 } 00:25:23.207 ] 00:25:23.207 } 00:25:23.207 ] 00:25:23.207 07:44:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.207 07:44:27 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:23.207 [2024-10-07 07:44:27.051397] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:25:23.207 [2024-10-07 07:44:27.051430] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40560 ] 00:25:23.207 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.207 [2024-10-07 07:44:27.077958] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:23.207 [2024-10-07 07:44:27.078005] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:23.207 [2024-10-07 07:44:27.078010] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:23.207 [2024-10-07 07:44:27.078021] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:23.207 [2024-10-07 07:44:27.078027] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:23.207 [2024-10-07 07:44:27.082090] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:23.207 [2024-10-07 07:44:27.082124] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16c0040 0 00:25:23.207 [2024-10-07 07:44:27.090071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:23.207 [2024-10-07 07:44:27.090083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:23.207 [2024-10-07 07:44:27.090088] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:23.207 [2024-10-07 07:44:27.090091] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:23.207 [2024-10-07 07:44:27.090126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.207 [2024-10-07 07:44:27.090134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.207 
[2024-10-07 07:44:27.090138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.207 [2024-10-07 07:44:27.090151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:23.207 [2024-10-07 07:44:27.090168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.207 [2024-10-07 07:44:27.097068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.207 [2024-10-07 07:44:27.097076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.207 [2024-10-07 07:44:27.097079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.207 [2024-10-07 07:44:27.097083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.207 [2024-10-07 07:44:27.097093] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:23.207 [2024-10-07 07:44:27.097099] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:23.207 [2024-10-07 07:44:27.097104] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:23.207 [2024-10-07 07:44:27.097115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.097128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, 
cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.097250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.097256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.097260] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.097269] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:23.208 [2024-10-07 07:44:27.097275] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:23.208 [2024-10-07 07:44:27.097281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.097294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.097380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.097386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.097389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.097398] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:23.208 [2024-10-07 07:44:27.097405] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097411] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.097426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.097510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.097516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.097519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.097527] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 
[2024-10-07 07:44:27.097547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.097635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.097641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.097644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097647] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.097652] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:23.208 [2024-10-07 07:44:27.097656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097663] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097768] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:23.208 [2024-10-07 07:44:27.097772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.097791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.097880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.097886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.097889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.097897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:23.208 [2024-10-07 07:44:27.097906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097910] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.097913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.097919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.097928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.098004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.098010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.098013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 
[2024-10-07 07:44:27.098016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.098020] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:23.208 [2024-10-07 07:44:27.098024] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:23.208 [2024-10-07 07:44:27.098031] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:23.208 [2024-10-07 07:44:27.098039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:23.208 [2024-10-07 07:44:27.098047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.098066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.208 [2024-10-07 07:44:27.098077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.098205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.208 [2024-10-07 07:44:27.098211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.208 [2024-10-07 07:44:27.098215] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098218] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x16c0040): datao=0, datal=4096, cccid=0 00:25:23.208 [2024-10-07 07:44:27.098222] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ad50) on tqpair(0x16c0040): expected_datao=0, payload_size=4096 00:25:23.208 [2024-10-07 07:44:27.098230] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098234] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.098266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.208 [2024-10-07 07:44:27.098269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.208 [2024-10-07 07:44:27.098279] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:23.208 [2024-10-07 07:44:27.098284] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:23.208 [2024-10-07 07:44:27.098287] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:23.208 [2024-10-07 07:44:27.098292] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:23.208 [2024-10-07 07:44:27.098298] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:23.208 [2024-10-07 07:44:27.098302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:23.208 [2024-10-07 07:44:27.098314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:23.208 [2024-10-07 07:44:27.098320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098324] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.208 [2024-10-07 07:44:27.098327] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.208 [2024-10-07 07:44:27.098333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:23.208 [2024-10-07 07:44:27.098344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.208 [2024-10-07 07:44:27.098428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.208 [2024-10-07 07:44:27.098433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.209 [2024-10-07 07:44:27.098436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172ad50) on tqpair=0x16c0040 00:25:23.209 [2024-10-07 07:44:27.098448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.209 [2024-10-07 07:44:27.098464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 
[2024-10-07 07:44:27.098470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.209 [2024-10-07 07:44:27.098480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.209 [2024-10-07 07:44:27.098497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.209 [2024-10-07 07:44:27.098511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:23.209 [2024-10-07 07:44:27.098522] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:23.209 [2024-10-07 07:44:27.098527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098533] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.209 [2024-10-07 07:44:27.098552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ad50, cid 0, qid 0 00:25:23.209 [2024-10-07 07:44:27.098556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172aeb0, cid 1, qid 0 00:25:23.209 [2024-10-07 07:44:27.098560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b010, cid 2, qid 0 00:25:23.209 [2024-10-07 07:44:27.098564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.209 [2024-10-07 07:44:27.098568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b2d0, cid 4, qid 0 00:25:23.209 [2024-10-07 07:44:27.098687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.209 [2024-10-07 07:44:27.098693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.209 [2024-10-07 07:44:27.098696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098700] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b2d0) on tqpair=0x16c0040 00:25:23.209 [2024-10-07 07:44:27.098705] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:23.209 [2024-10-07 07:44:27.098710] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:23.209 [2024-10-07 07:44:27.098719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 
[2024-10-07 07:44:27.098725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.098731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.209 [2024-10-07 07:44:27.098740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b2d0, cid 4, qid 0 00:25:23.209 [2024-10-07 07:44:27.098825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.209 [2024-10-07 07:44:27.098830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.209 [2024-10-07 07:44:27.098833] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098836] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16c0040): datao=0, datal=4096, cccid=4 00:25:23.209 [2024-10-07 07:44:27.098840] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172b2d0) on tqpair(0x16c0040): expected_datao=0, payload_size=4096 00:25:23.209 [2024-10-07 07:44:27.098868] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.098872] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.209 [2024-10-07 07:44:27.139150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.209 [2024-10-07 07:44:27.139154] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b2d0) on tqpair=0x16c0040 00:25:23.209 [2024-10-07 07:44:27.139171] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:23.209 [2024-10-07 07:44:27.139195] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139199] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.139209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.209 [2024-10-07 07:44:27.139215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16c0040) 00:25:23.209 [2024-10-07 07:44:27.139229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.209 [2024-10-07 07:44:27.139244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b2d0, cid 4, qid 0 00:25:23.209 [2024-10-07 07:44:27.139249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b430, cid 5, qid 0 00:25:23.209 [2024-10-07 07:44:27.139364] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.209 [2024-10-07 07:44:27.139370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.209 [2024-10-07 07:44:27.139373] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139376] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16c0040): datao=0, datal=1024, cccid=4 00:25:23.209 [2024-10-07 07:44:27.139380] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172b2d0) on tqpair(0x16c0040): expected_datao=0, payload_size=1024 00:25:23.209 [2024-10-07 07:44:27.139386] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139389] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.209 [2024-10-07 07:44:27.139399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.209 [2024-10-07 07:44:27.139402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.209 [2024-10-07 07:44:27.139405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b430) on tqpair=0x16c0040 00:25:23.472 [2024-10-07 07:44:27.180157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.472 [2024-10-07 07:44:27.180171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.472 [2024-10-07 07:44:27.180174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.472 [2024-10-07 07:44:27.180178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b2d0) on tqpair=0x16c0040 00:25:23.472 [2024-10-07 07:44:27.180191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.472 [2024-10-07 07:44:27.180195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.472 [2024-10-07 07:44:27.180198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16c0040) 00:25:23.472 [2024-10-07 07:44:27.180204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.472 [2024-10-07 07:44:27.180221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b2d0, cid 4, qid 0 00:25:23.472 [2024-10-07 07:44:27.180309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.472 [2024-10-07 07:44:27.180316] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:25:23.472 [2024-10-07 07:44:27.180321] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180324] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16c0040): datao=0, datal=3072, cccid=4 00:25:23.473 [2024-10-07 07:44:27.180328] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172b2d0) on tqpair(0x16c0040): expected_datao=0, payload_size=3072 00:25:23.473 [2024-10-07 07:44:27.180334] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180337] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.473 [2024-10-07 07:44:27.180373] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.473 [2024-10-07 07:44:27.180376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b2d0) on tqpair=0x16c0040 00:25:23.473 [2024-10-07 07:44:27.180388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16c0040) 00:25:23.473 [2024-10-07 07:44:27.180403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.473 [2024-10-07 07:44:27.180417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b2d0, cid 4, qid 0 00:25:23.473 [2024-10-07 07:44:27.180509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.473 [2024-10-07 07:44:27.180515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:25:23.473 [2024-10-07 07:44:27.180518] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180521] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16c0040): datao=0, datal=8, cccid=4 00:25:23.473 [2024-10-07 07:44:27.180525] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172b2d0) on tqpair(0x16c0040): expected_datao=0, payload_size=8 00:25:23.473 [2024-10-07 07:44:27.180531] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.180534] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.225067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.473 [2024-10-07 07:44:27.225076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.473 [2024-10-07 07:44:27.225080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.473 [2024-10-07 07:44:27.225083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b2d0) on tqpair=0x16c0040 00:25:23.473 ===================================================== 00:25:23.473 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:23.473 ===================================================== 00:25:23.473 Controller Capabilities/Features 00:25:23.473 ================================ 00:25:23.473 Vendor ID: 0000 00:25:23.473 Subsystem Vendor ID: 0000 00:25:23.473 Serial Number: .................... 00:25:23.473 Model Number: ........................................ 
00:25:23.473 Firmware Version: 24.01.1 00:25:23.473 Recommended Arb Burst: 0 00:25:23.473 IEEE OUI Identifier: 00 00 00 00:25:23.473 Multi-path I/O 00:25:23.473 May have multiple subsystem ports: No 00:25:23.473 May have multiple controllers: No 00:25:23.473 Associated with SR-IOV VF: No 00:25:23.473 Max Data Transfer Size: 131072 00:25:23.473 Max Number of Namespaces: 0 00:25:23.473 Max Number of I/O Queues: 1024 00:25:23.473 NVMe Specification Version (VS): 1.3 00:25:23.473 NVMe Specification Version (Identify): 1.3 00:25:23.473 Maximum Queue Entries: 128 00:25:23.473 Contiguous Queues Required: Yes 00:25:23.473 Arbitration Mechanisms Supported 00:25:23.473 Weighted Round Robin: Not Supported 00:25:23.473 Vendor Specific: Not Supported 00:25:23.473 Reset Timeout: 15000 ms 00:25:23.473 Doorbell Stride: 4 bytes 00:25:23.473 NVM Subsystem Reset: Not Supported 00:25:23.473 Command Sets Supported 00:25:23.473 NVM Command Set: Supported 00:25:23.473 Boot Partition: Not Supported 00:25:23.473 Memory Page Size Minimum: 4096 bytes 00:25:23.473 Memory Page Size Maximum: 4096 bytes 00:25:23.473 Persistent Memory Region: Not Supported 00:25:23.473 Optional Asynchronous Events Supported 00:25:23.473 Namespace Attribute Notices: Not Supported 00:25:23.473 Firmware Activation Notices: Not Supported 00:25:23.473 ANA Change Notices: Not Supported 00:25:23.473 PLE Aggregate Log Change Notices: Not Supported 00:25:23.473 LBA Status Info Alert Notices: Not Supported 00:25:23.473 EGE Aggregate Log Change Notices: Not Supported 00:25:23.473 Normal NVM Subsystem Shutdown event: Not Supported 00:25:23.473 Zone Descriptor Change Notices: Not Supported 00:25:23.473 Discovery Log Change Notices: Supported 00:25:23.473 Controller Attributes 00:25:23.473 128-bit Host Identifier: Not Supported 00:25:23.473 Non-Operational Permissive Mode: Not Supported 00:25:23.473 NVM Sets: Not Supported 00:25:23.473 Read Recovery Levels: Not Supported 00:25:23.473 Endurance Groups: Not Supported 
00:25:23.473 Predictable Latency Mode: Not Supported 00:25:23.473 Traffic Based Keep Alive: Not Supported 00:25:23.473 Namespace Granularity: Not Supported 00:25:23.473 SQ Associations: Not Supported 00:25:23.473 UUID List: Not Supported 00:25:23.473 Multi-Domain Subsystem: Not Supported 00:25:23.473 Fixed Capacity Management: Not Supported 00:25:23.473 Variable Capacity Management: Not Supported 00:25:23.473 Delete Endurance Group: Not Supported 00:25:23.473 Delete NVM Set: Not Supported 00:25:23.473 Extended LBA Formats Supported: Not Supported 00:25:23.473 Flexible Data Placement Supported: Not Supported 00:25:23.473 00:25:23.473 Controller Memory Buffer Support 00:25:23.473 ================================ 00:25:23.473 Supported: No 00:25:23.473 00:25:23.473 Persistent Memory Region Support 00:25:23.473 ================================ 00:25:23.473 Supported: No 00:25:23.473 00:25:23.473 Admin Command Set Attributes 00:25:23.473 ============================ 00:25:23.473 Security Send/Receive: Not Supported 00:25:23.473 Format NVM: Not Supported 00:25:23.473 Firmware Activate/Download: Not Supported 00:25:23.473 Namespace Management: Not Supported 00:25:23.473 Device Self-Test: Not Supported 00:25:23.473 Directives: Not Supported 00:25:23.473 NVMe-MI: Not Supported 00:25:23.473 Virtualization Management: Not Supported 00:25:23.473 Doorbell Buffer Config: Not Supported 00:25:23.473 Get LBA Status Capability: Not Supported 00:25:23.473 Command & Feature Lockdown Capability: Not Supported 00:25:23.473 Abort Command Limit: 1 00:25:23.473 Async Event Request Limit: 4 00:25:23.473 Number of Firmware Slots: N/A 00:25:23.473 Firmware Slot 1 Read-Only: N/A 00:25:23.473 Firmware Activation Without Reset: N/A 00:25:23.473 Multiple Update Detection Support: N/A 00:25:23.473 Firmware Update Granularity: No Information Provided 00:25:23.473 Per-Namespace SMART Log: No 00:25:23.473 Asymmetric Namespace Access Log Page: Not Supported 00:25:23.473 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:23.473 Command Effects Log Page: Not Supported 00:25:23.473 Get Log Page Extended Data: Supported 00:25:23.473 Telemetry Log Pages: Not Supported 00:25:23.473 Persistent Event Log Pages: Not Supported 00:25:23.473 Supported Log Pages Log Page: May Support 00:25:23.473 Commands Supported & Effects Log Page: Not Supported 00:25:23.473 Feature Identifiers & Effects Log Page: May Support 00:25:23.473 NVMe-MI Commands & Effects Log Page: May Support 00:25:23.473 Data Area 4 for Telemetry Log: Not Supported 00:25:23.473 Error Log Page Entries Supported: 128 00:25:23.473 Keep Alive: Not Supported 00:25:23.473 00:25:23.473 NVM Command Set Attributes 00:25:23.473 ========================== 00:25:23.473 Submission Queue Entry Size 00:25:23.473 Max: 1 00:25:23.473 Min: 1 00:25:23.473 Completion Queue Entry Size 00:25:23.473 Max: 1 00:25:23.473 Min: 1 00:25:23.473 Number of Namespaces: 0 00:25:23.473 Compare Command: Not Supported 00:25:23.473 Write Uncorrectable Command: Not Supported 00:25:23.473 Dataset Management Command: Not Supported 00:25:23.473 Write Zeroes Command: Not Supported 00:25:23.473 Set Features Save Field: Not Supported 00:25:23.473 Reservations: Not Supported 00:25:23.473 Timestamp: Not Supported 00:25:23.473 Copy: Not Supported 00:25:23.473 Volatile Write Cache: Not Present 00:25:23.473 Atomic Write Unit (Normal): 1 00:25:23.473 Atomic Write Unit (PFail): 1 00:25:23.473 Atomic Compare & Write Unit: 1 00:25:23.473 Fused Compare & Write: Supported 00:25:23.473 Scatter-Gather List 00:25:23.473 SGL Command Set: Supported 00:25:23.473 SGL Keyed: Supported 00:25:23.473 SGL Bit Bucket Descriptor: Not Supported 00:25:23.473 SGL Metadata Pointer: Not Supported 00:25:23.473 Oversized SGL: Not Supported 00:25:23.473 SGL Metadata Address: Not Supported 00:25:23.473 SGL Offset: Supported 00:25:23.473 Transport SGL Data Block: Not Supported 00:25:23.473 Replay Protected Memory Block: Not Supported 00:25:23.473 00:25:23.473 
Firmware Slot Information 00:25:23.473 ========================= 00:25:23.473 Active slot: 0 00:25:23.473 00:25:23.473 00:25:23.473 Error Log 00:25:23.473 ========= 00:25:23.473 00:25:23.473 Active Namespaces 00:25:23.473 ================= 00:25:23.473 Discovery Log Page 00:25:23.473 ================== 00:25:23.473 Generation Counter: 2 00:25:23.473 Number of Records: 2 00:25:23.473 Record Format: 0 00:25:23.474 00:25:23.474 Discovery Log Entry 0 00:25:23.474 ---------------------- 00:25:23.474 Transport Type: 3 (TCP) 00:25:23.474 Address Family: 1 (IPv4) 00:25:23.474 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:23.474 Entry Flags: 00:25:23.474 Duplicate Returned Information: 1 00:25:23.474 Explicit Persistent Connection Support for Discovery: 1 00:25:23.474 Transport Requirements: 00:25:23.474 Secure Channel: Not Required 00:25:23.474 Port ID: 0 (0x0000) 00:25:23.474 Controller ID: 65535 (0xffff) 00:25:23.474 Admin Max SQ Size: 128 00:25:23.474 Transport Service Identifier: 4420 00:25:23.474 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:23.474 Transport Address: 10.0.0.2 00:25:23.474 Discovery Log Entry 1 00:25:23.474 ---------------------- 00:25:23.474 Transport Type: 3 (TCP) 00:25:23.474 Address Family: 1 (IPv4) 00:25:23.474 Subsystem Type: 2 (NVM Subsystem) 00:25:23.474 Entry Flags: 00:25:23.474 Duplicate Returned Information: 0 00:25:23.474 Explicit Persistent Connection Support for Discovery: 0 00:25:23.474 Transport Requirements: 00:25:23.474 Secure Channel: Not Required 00:25:23.474 Port ID: 0 (0x0000) 00:25:23.474 Controller ID: 65535 (0xffff) 00:25:23.474 Admin Max SQ Size: 128 00:25:23.474 Transport Service Identifier: 4420 00:25:23.474 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:23.474 Transport Address: 10.0.0.2 [2024-10-07 07:44:27.225165] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:23.474 [2024-10-07 07:44:27.225178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.474 [2024-10-07 07:44:27.225185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.474 [2024-10-07 07:44:27.225190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.474 [2024-10-07 07:44:27.225195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.474 [2024-10-07 07:44:27.225202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.474 [2024-10-07 07:44:27.225215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.474 [2024-10-07 07:44:27.225229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.474 [2024-10-07 07:44:27.225306] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.474 [2024-10-07 07:44:27.225312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.474 [2024-10-07 07:44:27.225315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b170) on tqpair=0x16c0040 00:25:23.474 [2024-10-07 07:44:27.225326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225329] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.474 [2024-10-07 
07:44:27.225332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.474 [2024-10-07 07:44:27.225338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.474 [2024-10-07 07:44:27.225352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.474 [2024-10-07 07:44:27.225440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.474 [2024-10-07 07:44:27.225446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.474 [2024-10-07 07:44:27.225449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b170) on tqpair=0x16c0040 00:25:23.474 [2024-10-07 07:44:27.225457] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:23.474 [2024-10-07 07:44:27.225461] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:23.474 [2024-10-07 07:44:27.225469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.474 [2024-10-07 07:44:27.225481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.474 [2024-10-07 07:44:27.225491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.474 [2024-10-07 07:44:27.225568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.474 [2024-10-07 
07:44:27.225573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.474 [2024-10-07 07:44:27.225577] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b170) on tqpair=0x16c0040 00:25:23.474 [2024-10-07 07:44:27.225589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.474 [2024-10-07 07:44:27.225601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.474 [2024-10-07 07:44:27.225611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.474 [2024-10-07 07:44:27.225687] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.474 [2024-10-07 07:44:27.225693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.474 [2024-10-07 07:44:27.225696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225699] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b170) on tqpair=0x16c0040 00:25:23.474 [2024-10-07 07:44:27.225707] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.474 [2024-10-07 07:44:27.225714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16c0040) 00:25:23.474 [2024-10-07 07:44:27.225719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.474 
[2024-10-07 07:44:27.225729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172b170, cid 3, qid 0 00:25:23.474 [2024-10-07 07:44:27.230524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.230529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.230532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.230535] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x172b170) on tqpair=0x16c0040 00:25:23.477 [2024-10-07 07:44:27.230543] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:23.477 00:25:23.477 07:44:27 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:23.477 [2024-10-07 07:44:27.264466] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:23.477 [2024-10-07 07:44:27.264500] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40563 ] 00:25:23.477 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.477 [2024-10-07 07:44:27.288047] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:23.477 [2024-10-07 07:44:27.292091] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:23.477 [2024-10-07 07:44:27.292097] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:23.477 [2024-10-07 07:44:27.292106] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:23.477 [2024-10-07 07:44:27.292112] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:23.477 [2024-10-07 07:44:27.292451] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:23.477 [2024-10-07 07:44:27.292476] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd51040 0 00:25:23.477 [2024-10-07 07:44:27.307069] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:23.477 [2024-10-07 07:44:27.307083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:23.477 [2024-10-07 07:44:27.307087] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:23.477 [2024-10-07 07:44:27.307090] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:23.477 [2024-10-07 07:44:27.307118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.307123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.307126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.477 [2024-10-07 07:44:27.307136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:23.477 [2024-10-07 07:44:27.307152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.477 [2024-10-07 07:44:27.315070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.315078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.315081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.477 [2024-10-07 07:44:27.315095] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:23.477 [2024-10-07 07:44:27.315100] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:23.477 [2024-10-07 07:44:27.315105] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:23.477 [2024-10-07 07:44:27.315114] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.477 [2024-10-07 07:44:27.315127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-10-07 07:44:27.315140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.477 [2024-10-07 07:44:27.315288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.315294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.315297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.477 [2024-10-07 07:44:27.315305] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:23.477 [2024-10-07 07:44:27.315311] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:23.477 [2024-10-07 07:44:27.315320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315323] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315326] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.477 [2024-10-07 07:44:27.315332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-10-07 07:44:27.315343] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.477 [2024-10-07 07:44:27.315421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.315426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.315429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315432] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.477 [2024-10-07 07:44:27.315436] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:23.477 [2024-10-07 07:44:27.315443] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:23.477 [2024-10-07 07:44:27.315449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.477 [2024-10-07 07:44:27.315461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-10-07 07:44:27.315471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.477 [2024-10-07 07:44:27.315555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.315560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.315563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 
00:25:23.477 [2024-10-07 07:44:27.315570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:23.477 [2024-10-07 07:44:27.315578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.477 [2024-10-07 07:44:27.315590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.477 [2024-10-07 07:44:27.315599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.477 [2024-10-07 07:44:27.315677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.477 [2024-10-07 07:44:27.315683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.477 [2024-10-07 07:44:27.315686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.477 [2024-10-07 07:44:27.315689] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.477 [2024-10-07 07:44:27.315692] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:23.477 [2024-10-07 07:44:27.315697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:23.477 [2024-10-07 07:44:27.315703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:23.477 [2024-10-07 07:44:27.315808] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 
00:25:23.477 [2024-10-07 07:44:27.315813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:23.477 [2024-10-07 07:44:27.315820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.315823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.315826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.315832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.478 [2024-10-07 07:44:27.315841] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.478 [2024-10-07 07:44:27.315920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.315926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.315928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.315931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.478 [2024-10-07 07:44:27.315935] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:23.478 [2024-10-07 07:44:27.315944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.315947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.315950] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.315955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:23.478 [2024-10-07 07:44:27.315965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.478 [2024-10-07 07:44:27.316045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.316051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.316054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.316057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.478 [2024-10-07 07:44:27.316067] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:23.478 [2024-10-07 07:44:27.316071] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.316078] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:23.478 [2024-10-07 07:44:27.316085] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.316093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.316096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.316099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.316104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.478 [2024-10-07 07:44:27.316115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xdbbd50, cid 0, qid 0 00:25:23.478 [2024-10-07 07:44:27.316236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.478 [2024-10-07 07:44:27.316242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.478 [2024-10-07 07:44:27.316245] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.316248] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=4096, cccid=0 00:25:23.478 [2024-10-07 07:44:27.316254] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbbd50) on tqpair(0xd51040): expected_datao=0, payload_size=4096 00:25:23.478 [2024-10-07 07:44:27.316282] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.316286] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.357224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.357227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.478 [2024-10-07 07:44:27.357237] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:23.478 [2024-10-07 07:44:27.357242] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:23.478 [2024-10-07 07:44:27.357246] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:23.478 [2024-10-07 07:44:27.357250] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:23.478 [2024-10-07 07:44:27.357253] 
nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:23.478 [2024-10-07 07:44:27.357258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:23.478 [2024-10-07 07:44:27.357302] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.478 [2024-10-07 07:44:27.357380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.357386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.357389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbbd50) on tqpair=0xd51040 00:25:23.478 [2024-10-07 07:44:27.357397] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357401] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357409] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.478 [2024-10-07 07:44:27.357414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.478 [2024-10-07 07:44:27.357429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.478 [2024-10-07 07:44:27.357447] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.478 [2024-10-07 07:44:27.357462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357472] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.478 [2024-10-07 07:44:27.357500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbd50, cid 0, qid 0 00:25:23.478 [2024-10-07 07:44:27.357505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbbeb0, cid 1, qid 0 00:25:23.478 [2024-10-07 07:44:27.357509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc010, cid 2, qid 0 00:25:23.478 [2024-10-07 07:44:27.357512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.478 [2024-10-07 07:44:27.357516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.478 [2024-10-07 07:44:27.357631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.357637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.357639] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.478 [2024-10-07 07:44:27.357646] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:23.478 [2024-10-07 07:44:27.357651] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357664] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:23.478 [2024-10-07 07:44:27.357670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.478 [2024-10-07 07:44:27.357681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:23.478 [2024-10-07 07:44:27.357691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.478 [2024-10-07 07:44:27.357772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.478 [2024-10-07 07:44:27.357777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.478 [2024-10-07 07:44:27.357780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.478 [2024-10-07 07:44:27.357783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.479 [2024-10-07 07:44:27.357836] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:23.479 [2024-10-07 07:44:27.357846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 
30000 ms) 00:25:23.479 [2024-10-07 07:44:27.357853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.357856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.357859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.479 [2024-10-07 07:44:27.357864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.479 [2024-10-07 07:44:27.357873] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.479 [2024-10-07 07:44:27.357964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.479 [2024-10-07 07:44:27.357970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.479 [2024-10-07 07:44:27.357973] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.357976] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=4096, cccid=4 00:25:23.479 [2024-10-07 07:44:27.357980] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc2d0) on tqpair(0xd51040): expected_datao=0, payload_size=4096 00:25:23.479 [2024-10-07 07:44:27.358008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.358012] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.479 [2024-10-07 07:44:27.400076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.479 [2024-10-07 07:44:27.400079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) 
on tqpair=0xd51040 00:25:23.479 [2024-10-07 07:44:27.400096] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:23.479 [2024-10-07 07:44:27.400103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:23.479 [2024-10-07 07:44:27.400112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:23.479 [2024-10-07 07:44:27.400119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.479 [2024-10-07 07:44:27.400131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.479 [2024-10-07 07:44:27.400144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.479 [2024-10-07 07:44:27.400307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.479 [2024-10-07 07:44:27.400313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.479 [2024-10-07 07:44:27.400317] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400319] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=4096, cccid=4 00:25:23.479 [2024-10-07 07:44:27.400323] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc2d0) on tqpair(0xd51040): expected_datao=0, payload_size=4096 00:25:23.479 [2024-10-07 07:44:27.400352] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.479 [2024-10-07 07:44:27.400356] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.740 [2024-10-07 07:44:27.442081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.740 [2024-10-07 07:44:27.442084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.740 [2024-10-07 07:44:27.442103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.442113] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.442120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.740 [2024-10-07 07:44:27.442133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.740 [2024-10-07 07:44:27.442145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.740 [2024-10-07 07:44:27.442253] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.740 [2024-10-07 07:44:27.442259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.740 [2024-10-07 07:44:27.442262] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442265] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=4096, cccid=4 00:25:23.740 [2024-10-07 07:44:27.442269] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc2d0) on tqpair(0xd51040): expected_datao=0, payload_size=4096 00:25:23.740 [2024-10-07 07:44:27.442275] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.442278] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.483211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.740 [2024-10-07 07:44:27.483223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.740 [2024-10-07 07:44:27.483226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.483229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.740 [2024-10-07 07:44:27.483238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483268] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport 
- not sending Set Features - Host ID 00:25:23.740 [2024-10-07 07:44:27.483272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:23.740 [2024-10-07 07:44:27.483276] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:23.740 [2024-10-07 07:44:27.483288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.483292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.740 [2024-10-07 07:44:27.483295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.740 [2024-10-07 07:44:27.483303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.741 [2024-10-07 07:44:27.483335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.741 [2024-10-07 07:44:27.483340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc430, cid 5, qid 0 00:25:23.741 [2024-10-07 07:44:27.483432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.483438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.483441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:23.741 [2024-10-07 07:44:27.483444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.483450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.483454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.483457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc430) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.483469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483490] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc430, cid 5, qid 0 00:25:23.741 [2024-10-07 07:44:27.483570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.483576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.483579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483582] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc430) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.483589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483595] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc430, cid 5, qid 0 00:25:23.741 [2024-10-07 07:44:27.483690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.483695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.483698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483701] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc430) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.483709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483712] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483731] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc430, cid 5, qid 0 00:25:23.741 [2024-10-07 07:44:27.483809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.483815] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.483817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483820] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xdbc430) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.483831] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483855] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.483890] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd51040) 00:25:23.741 [2024-10-07 07:44:27.483895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.741 [2024-10-07 07:44:27.483905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc430, cid 5, qid 0 00:25:23.741 [2024-10-07 07:44:27.483909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc2d0, cid 4, qid 0 00:25:23.741 [2024-10-07 07:44:27.483913] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc590, cid 6, qid 0 00:25:23.741 [2024-10-07 07:44:27.483917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc6f0, cid 7, qid 0 00:25:23.741 [2024-10-07 07:44:27.484069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.741 [2024-10-07 07:44:27.484075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.741 [2024-10-07 07:44:27.484078] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484082] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=8192, cccid=5 00:25:23.741 [2024-10-07 07:44:27.484085] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc430) on tqpair(0xd51040): expected_datao=0, payload_size=8192 00:25:23.741 [2024-10-07 07:44:27.484140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484144] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.741 [2024-10-07 07:44:27.484157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.741 [2024-10-07 07:44:27.484162] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:25:23.741 [2024-10-07 07:44:27.484165] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=512, cccid=4 00:25:23.741 [2024-10-07 07:44:27.484169] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc2d0) on tqpair(0xd51040): expected_datao=0, payload_size=512 00:25:23.741 [2024-10-07 07:44:27.484175] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484177] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.741 [2024-10-07 07:44:27.484187] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.741 [2024-10-07 07:44:27.484190] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484192] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=512, cccid=6 00:25:23.741 [2024-10-07 07:44:27.484196] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc590) on tqpair(0xd51040): expected_datao=0, payload_size=512 00:25:23.741 [2024-10-07 07:44:27.484202] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484205] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:23.741 [2024-10-07 07:44:27.484214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:23.741 [2024-10-07 07:44:27.484217] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484219] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd51040): datao=0, datal=4096, cccid=7 00:25:23.741 [2024-10-07 07:44:27.484223] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbc6f0) 
on tqpair(0xd51040): expected_datao=0, payload_size=4096 00:25:23.741 [2024-10-07 07:44:27.484229] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484232] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.484254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.484257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc430) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.484271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.484276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.484279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484282] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc2d0) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.484289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.484294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.484297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 [2024-10-07 07:44:27.484300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc590) on tqpair=0xd51040 00:25:23.741 [2024-10-07 07:44:27.484305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.741 [2024-10-07 07:44:27.484310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.741 [2024-10-07 07:44:27.484313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.741 
[2024-10-07 07:44:27.484316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc6f0) on tqpair=0xd51040 00:25:23.741 ===================================================== 00:25:23.742 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:23.742 ===================================================== 00:25:23.742 Controller Capabilities/Features 00:25:23.742 ================================ 00:25:23.742 Vendor ID: 8086 00:25:23.742 Subsystem Vendor ID: 8086 00:25:23.742 Serial Number: SPDK00000000000001 00:25:23.742 Model Number: SPDK bdev Controller 00:25:23.742 Firmware Version: 24.01.1 00:25:23.742 Recommended Arb Burst: 6 00:25:23.742 IEEE OUI Identifier: e4 d2 5c 00:25:23.742 Multi-path I/O 00:25:23.742 May have multiple subsystem ports: Yes 00:25:23.742 May have multiple controllers: Yes 00:25:23.742 Associated with SR-IOV VF: No 00:25:23.742 Max Data Transfer Size: 131072 00:25:23.742 Max Number of Namespaces: 32 00:25:23.742 Max Number of I/O Queues: 127 00:25:23.742 NVMe Specification Version (VS): 1.3 00:25:23.742 NVMe Specification Version (Identify): 1.3 00:25:23.742 Maximum Queue Entries: 128 00:25:23.742 Contiguous Queues Required: Yes 00:25:23.742 Arbitration Mechanisms Supported 00:25:23.742 Weighted Round Robin: Not Supported 00:25:23.742 Vendor Specific: Not Supported 00:25:23.742 Reset Timeout: 15000 ms 00:25:23.742 Doorbell Stride: 4 bytes 00:25:23.742 NVM Subsystem Reset: Not Supported 00:25:23.742 Command Sets Supported 00:25:23.742 NVM Command Set: Supported 00:25:23.742 Boot Partition: Not Supported 00:25:23.742 Memory Page Size Minimum: 4096 bytes 00:25:23.742 Memory Page Size Maximum: 4096 bytes 00:25:23.742 Persistent Memory Region: Not Supported 00:25:23.742 Optional Asynchronous Events Supported 00:25:23.742 Namespace Attribute Notices: Supported 00:25:23.742 Firmware Activation Notices: Not Supported 00:25:23.742 ANA Change Notices: Not Supported 00:25:23.742 PLE Aggregate Log Change 
Notices: Not Supported 00:25:23.742 LBA Status Info Alert Notices: Not Supported 00:25:23.742 EGE Aggregate Log Change Notices: Not Supported 00:25:23.742 Normal NVM Subsystem Shutdown event: Not Supported 00:25:23.742 Zone Descriptor Change Notices: Not Supported 00:25:23.742 Discovery Log Change Notices: Not Supported 00:25:23.742 Controller Attributes 00:25:23.742 128-bit Host Identifier: Supported 00:25:23.742 Non-Operational Permissive Mode: Not Supported 00:25:23.742 NVM Sets: Not Supported 00:25:23.742 Read Recovery Levels: Not Supported 00:25:23.742 Endurance Groups: Not Supported 00:25:23.742 Predictable Latency Mode: Not Supported 00:25:23.742 Traffic Based Keep ALive: Not Supported 00:25:23.742 Namespace Granularity: Not Supported 00:25:23.742 SQ Associations: Not Supported 00:25:23.742 UUID List: Not Supported 00:25:23.742 Multi-Domain Subsystem: Not Supported 00:25:23.742 Fixed Capacity Management: Not Supported 00:25:23.742 Variable Capacity Management: Not Supported 00:25:23.742 Delete Endurance Group: Not Supported 00:25:23.742 Delete NVM Set: Not Supported 00:25:23.742 Extended LBA Formats Supported: Not Supported 00:25:23.742 Flexible Data Placement Supported: Not Supported 00:25:23.742 00:25:23.742 Controller Memory Buffer Support 00:25:23.742 ================================ 00:25:23.742 Supported: No 00:25:23.742 00:25:23.742 Persistent Memory Region Support 00:25:23.742 ================================ 00:25:23.742 Supported: No 00:25:23.742 00:25:23.742 Admin Command Set Attributes 00:25:23.742 ============================ 00:25:23.742 Security Send/Receive: Not Supported 00:25:23.742 Format NVM: Not Supported 00:25:23.742 Firmware Activate/Download: Not Supported 00:25:23.742 Namespace Management: Not Supported 00:25:23.742 Device Self-Test: Not Supported 00:25:23.742 Directives: Not Supported 00:25:23.742 NVMe-MI: Not Supported 00:25:23.742 Virtualization Management: Not Supported 00:25:23.742 Doorbell Buffer Config: Not Supported 
00:25:23.742 Get LBA Status Capability: Not Supported 00:25:23.742 Command & Feature Lockdown Capability: Not Supported 00:25:23.742 Abort Command Limit: 4 00:25:23.742 Async Event Request Limit: 4 00:25:23.742 Number of Firmware Slots: N/A 00:25:23.742 Firmware Slot 1 Read-Only: N/A 00:25:23.742 Firmware Activation Without Reset: N/A 00:25:23.742 Multiple Update Detection Support: N/A 00:25:23.742 Firmware Update Granularity: No Information Provided 00:25:23.742 Per-Namespace SMART Log: No 00:25:23.742 Asymmetric Namespace Access Log Page: Not Supported 00:25:23.742 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:23.742 Command Effects Log Page: Supported 00:25:23.742 Get Log Page Extended Data: Supported 00:25:23.742 Telemetry Log Pages: Not Supported 00:25:23.742 Persistent Event Log Pages: Not Supported 00:25:23.742 Supported Log Pages Log Page: May Support 00:25:23.742 Commands Supported & Effects Log Page: Not Supported 00:25:23.742 Feature Identifiers & Effects Log Page:May Support 00:25:23.742 NVMe-MI Commands & Effects Log Page: May Support 00:25:23.742 Data Area 4 for Telemetry Log: Not Supported 00:25:23.742 Error Log Page Entries Supported: 128 00:25:23.742 Keep Alive: Supported 00:25:23.742 Keep Alive Granularity: 10000 ms 00:25:23.742 00:25:23.742 NVM Command Set Attributes 00:25:23.742 ========================== 00:25:23.742 Submission Queue Entry Size 00:25:23.742 Max: 64 00:25:23.742 Min: 64 00:25:23.742 Completion Queue Entry Size 00:25:23.742 Max: 16 00:25:23.742 Min: 16 00:25:23.742 Number of Namespaces: 32 00:25:23.742 Compare Command: Supported 00:25:23.742 Write Uncorrectable Command: Not Supported 00:25:23.742 Dataset Management Command: Supported 00:25:23.742 Write Zeroes Command: Supported 00:25:23.742 Set Features Save Field: Not Supported 00:25:23.742 Reservations: Supported 00:25:23.742 Timestamp: Not Supported 00:25:23.742 Copy: Supported 00:25:23.742 Volatile Write Cache: Present 00:25:23.742 Atomic Write Unit (Normal): 1 00:25:23.742 
Atomic Write Unit (PFail): 1 00:25:23.742 Atomic Compare & Write Unit: 1 00:25:23.742 Fused Compare & Write: Supported 00:25:23.742 Scatter-Gather List 00:25:23.742 SGL Command Set: Supported 00:25:23.742 SGL Keyed: Supported 00:25:23.742 SGL Bit Bucket Descriptor: Not Supported 00:25:23.742 SGL Metadata Pointer: Not Supported 00:25:23.742 Oversized SGL: Not Supported 00:25:23.742 SGL Metadata Address: Not Supported 00:25:23.742 SGL Offset: Supported 00:25:23.742 Transport SGL Data Block: Not Supported 00:25:23.742 Replay Protected Memory Block: Not Supported 00:25:23.742 00:25:23.742 Firmware Slot Information 00:25:23.742 ========================= 00:25:23.742 Active slot: 1 00:25:23.742 Slot 1 Firmware Revision: 24.01.1 00:25:23.742 00:25:23.742 00:25:23.742 Commands Supported and Effects 00:25:23.742 ============================== 00:25:23.742 Admin Commands 00:25:23.742 -------------- 00:25:23.742 Get Log Page (02h): Supported 00:25:23.742 Identify (06h): Supported 00:25:23.742 Abort (08h): Supported 00:25:23.742 Set Features (09h): Supported 00:25:23.742 Get Features (0Ah): Supported 00:25:23.742 Asynchronous Event Request (0Ch): Supported 00:25:23.742 Keep Alive (18h): Supported 00:25:23.742 I/O Commands 00:25:23.742 ------------ 00:25:23.742 Flush (00h): Supported LBA-Change 00:25:23.742 Write (01h): Supported LBA-Change 00:25:23.742 Read (02h): Supported 00:25:23.742 Compare (05h): Supported 00:25:23.742 Write Zeroes (08h): Supported LBA-Change 00:25:23.742 Dataset Management (09h): Supported LBA-Change 00:25:23.742 Copy (19h): Supported LBA-Change 00:25:23.742 Unknown (79h): Supported LBA-Change 00:25:23.742 Unknown (7Ah): Supported 00:25:23.742 00:25:23.742 Error Log 00:25:23.742 ========= 00:25:23.742 00:25:23.742 Arbitration 00:25:23.742 =========== 00:25:23.742 Arbitration Burst: 1 00:25:23.742 00:25:23.742 Power Management 00:25:23.742 ================ 00:25:23.742 Number of Power States: 1 00:25:23.742 Current Power State: Power State #0 00:25:23.742 
Power State #0: 00:25:23.742 Max Power: 0.00 W 00:25:23.742 Non-Operational State: Operational 00:25:23.742 Entry Latency: Not Reported 00:25:23.742 Exit Latency: Not Reported 00:25:23.742 Relative Read Throughput: 0 00:25:23.742 Relative Read Latency: 0 00:25:23.742 Relative Write Throughput: 0 00:25:23.742 Relative Write Latency: 0 00:25:23.742 Idle Power: Not Reported 00:25:23.742 Active Power: Not Reported 00:25:23.742 Non-Operational Permissive Mode: Not Supported 00:25:23.742 00:25:23.742 Health Information 00:25:23.742 ================== 00:25:23.742 Critical Warnings: 00:25:23.742 Available Spare Space: OK 00:25:23.742 Temperature: OK 00:25:23.742 Device Reliability: OK 00:25:23.743 Read Only: No 00:25:23.743 Volatile Memory Backup: OK 00:25:23.743 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:23.743 Temperature Threshold: [2024-10-07 07:44:27.484399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.484414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.484425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc6f0, cid 7, qid 0 00:25:23.743 [2024-10-07 07:44:27.484508] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.484514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.484517] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc6f0) on tqpair=0xd51040 00:25:23.743 [2024-10-07 
07:44:27.484545] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:23.743 [2024-10-07 07:44:27.484556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.743 [2024-10-07 07:44:27.484562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.743 [2024-10-07 07:44:27.484567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.743 [2024-10-07 07:44:27.484572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.743 [2024-10-07 07:44:27.484579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.484591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.484602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.484679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.484685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.484687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.484696] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.484707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.484720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.484810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.484815] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.484818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484821] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.484825] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:23.743 [2024-10-07 07:44:27.484829] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:23.743 [2024-10-07 07:44:27.484837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.484849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.484860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.484940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.484946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.484949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484952] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.484960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484964] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.484967] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.484972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.484981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485086] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485095] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.485105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485187] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485189] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.485222] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485306] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485321] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.485341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.485457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485546] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.743 [2024-10-07 07:44:27.485575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.743 [2024-10-07 07:44:27.485652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.743 [2024-10-07 07:44:27.485658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.743 [2024-10-07 07:44:27.485660] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485663] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.743 [2024-10-07 07:44:27.485671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.743 [2024-10-07 07:44:27.485677] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.743 [2024-10-07 07:44:27.485683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-10-07 07:44:27.485692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.744 [2024-10-07 07:44:27.485767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.744 [2024-10-07 
07:44:27.485773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.744 [2024-10-07 07:44:27.485776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.744 [2024-10-07 07:44:27.485787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.744 [2024-10-07 07:44:27.485798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-10-07 07:44:27.485808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.744 [2024-10-07 07:44:27.485885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.744 [2024-10-07 07:44:27.485891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.744 [2024-10-07 07:44:27.485894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.744 [2024-10-07 07:44:27.485905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.485911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.744 [2024-10-07 07:44:27.485916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-10-07 
07:44:27.485926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.744 [2024-10-07 07:44:27.485999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.744 [2024-10-07 07:44:27.486004] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.744 [2024-10-07 07:44:27.486007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.486010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.744 [2024-10-07 07:44:27.486018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.486022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.486025] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.744 [2024-10-07 07:44:27.486030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-10-07 07:44:27.486039] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.744 [2024-10-07 07:44:27.490067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.744 [2024-10-07 07:44:27.490076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.744 [2024-10-07 07:44:27.490079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.490082] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.744 [2024-10-07 07:44:27.490092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.490096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.490099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xd51040) 00:25:23.744 [2024-10-07 07:44:27.490105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.744 [2024-10-07 07:44:27.490117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbc170, cid 3, qid 0 00:25:23.744 [2024-10-07 07:44:27.490260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:23.744 [2024-10-07 07:44:27.490265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:23.744 [2024-10-07 07:44:27.490268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:23.744 [2024-10-07 07:44:27.490272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdbc170) on tqpair=0xd51040 00:25:23.744 [2024-10-07 07:44:27.490278] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:23.744 0 Kelvin (-273 Celsius) 00:25:23.744 Available Spare: 0% 00:25:23.744 Available Spare Threshold: 0% 00:25:23.744 Life Percentage Used: 0% 00:25:23.744 Data Units Read: 0 00:25:23.744 Data Units Written: 0 00:25:23.744 Host Read Commands: 0 00:25:23.744 Host Write Commands: 0 00:25:23.744 Controller Busy Time: 0 minutes 00:25:23.744 Power Cycles: 0 00:25:23.744 Power On Hours: 0 hours 00:25:23.744 Unsafe Shutdowns: 0 00:25:23.744 Unrecoverable Media Errors: 0 00:25:23.744 Lifetime Error Log Entries: 0 00:25:23.744 Warning Temperature Time: 0 minutes 00:25:23.744 Critical Temperature Time: 0 minutes 00:25:23.744 00:25:23.744 Number of Queues 00:25:23.744 ================ 00:25:23.744 Number of I/O Submission Queues: 127 00:25:23.744 Number of I/O Completion Queues: 127 00:25:23.744 00:25:23.744 Active Namespaces 00:25:23.744 ================= 00:25:23.744 Namespace ID:1 00:25:23.744 Error Recovery Timeout: Unlimited 00:25:23.744 Command Set Identifier: NVM (00h) 00:25:23.744 Deallocate: Supported 
00:25:23.744 Deallocated/Unwritten Error: Not Supported 00:25:23.744 Deallocated Read Value: Unknown 00:25:23.744 Deallocate in Write Zeroes: Not Supported 00:25:23.744 Deallocated Guard Field: 0xFFFF 00:25:23.744 Flush: Supported 00:25:23.744 Reservation: Supported 00:25:23.744 Namespace Sharing Capabilities: Multiple Controllers 00:25:23.744 Size (in LBAs): 131072 (0GiB) 00:25:23.744 Capacity (in LBAs): 131072 (0GiB) 00:25:23.744 Utilization (in LBAs): 131072 (0GiB) 00:25:23.744 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:23.744 EUI64: ABCDEF0123456789 00:25:23.744 UUID: b9844a1c-dd3e-4208-951b-0a63067f9182 00:25:23.744 Thin Provisioning: Not Supported 00:25:23.744 Per-NS Atomic Units: Yes 00:25:23.744 Atomic Boundary Size (Normal): 0 00:25:23.744 Atomic Boundary Size (PFail): 0 00:25:23.744 Atomic Boundary Offset: 0 00:25:23.744 Maximum Single Source Range Length: 65535 00:25:23.744 Maximum Copy Length: 65535 00:25:23.744 Maximum Source Range Count: 1 00:25:23.744 NGUID/EUI64 Never Reused: No 00:25:23.744 Namespace Write Protected: No 00:25:23.744 Number of LBA Formats: 1 00:25:23.744 Current LBA Format: LBA Format #00 00:25:23.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:23.744 00:25:23.744 07:44:27 -- host/identify.sh@51 -- # sync 00:25:23.744 07:44:27 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.744 07:44:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.744 07:44:27 -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 07:44:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.744 07:44:27 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:23.744 07:44:27 -- host/identify.sh@56 -- # nvmftestfini 00:25:23.744 07:44:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:23.744 07:44:27 -- nvmf/common.sh@116 -- # sync 00:25:23.744 07:44:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:23.744 07:44:27 -- nvmf/common.sh@119 -- # set +e 00:25:23.744 07:44:27 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:25:23.744 07:44:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:23.744 rmmod nvme_tcp 00:25:23.744 rmmod nvme_fabrics 00:25:23.744 rmmod nvme_keyring 00:25:23.744 07:44:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:23.744 07:44:27 -- nvmf/common.sh@123 -- # set -e 00:25:23.744 07:44:27 -- nvmf/common.sh@124 -- # return 0 00:25:23.744 07:44:27 -- nvmf/common.sh@477 -- # '[' -n 40312 ']' 00:25:23.744 07:44:27 -- nvmf/common.sh@478 -- # killprocess 40312 00:25:23.744 07:44:27 -- common/autotest_common.sh@926 -- # '[' -z 40312 ']' 00:25:23.744 07:44:27 -- common/autotest_common.sh@930 -- # kill -0 40312 00:25:23.744 07:44:27 -- common/autotest_common.sh@931 -- # uname 00:25:23.744 07:44:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.744 07:44:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40312 00:25:23.744 07:44:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:23.744 07:44:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:23.744 07:44:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40312' 00:25:23.744 killing process with pid 40312 00:25:23.744 07:44:27 -- common/autotest_common.sh@945 -- # kill 40312 00:25:23.744 [2024-10-07 07:44:27.624858] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:23.744 07:44:27 -- common/autotest_common.sh@950 -- # wait 40312 00:25:24.004 07:44:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:24.004 07:44:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:24.004 07:44:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:24.004 07:44:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.004 07:44:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:24.004 07:44:27 -- nvmf/common.sh@616 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.004 07:44:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.004 07:44:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.545 07:44:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:26.545 00:25:26.545 real 0m9.580s 00:25:26.545 user 0m8.034s 00:25:26.545 sys 0m4.630s 00:25:26.545 07:44:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:26.545 07:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:26.545 ************************************ 00:25:26.545 END TEST nvmf_identify 00:25:26.545 ************************************ 00:25:26.545 07:44:29 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:26.545 07:44:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:26.545 07:44:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:26.545 07:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:26.545 ************************************ 00:25:26.545 START TEST nvmf_perf 00:25:26.545 ************************************ 00:25:26.545 07:44:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:26.545 * Looking for test storage... 
00:25:26.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.545 07:44:30 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.545 07:44:30 -- nvmf/common.sh@7 -- # uname -s 00:25:26.545 07:44:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.545 07:44:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.545 07:44:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.545 07:44:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.545 07:44:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.545 07:44:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.545 07:44:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.545 07:44:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.545 07:44:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.545 07:44:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.545 07:44:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:26.545 07:44:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:26.545 07:44:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.545 07:44:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.545 07:44:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.545 07:44:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.545 07:44:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.545 07:44:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.545 07:44:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.545 07:44:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.545 07:44:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.545 07:44:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.545 07:44:30 -- paths/export.sh@5 -- # export PATH 00:25:26.545 07:44:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.545 07:44:30 -- nvmf/common.sh@46 -- # : 0 00:25:26.545 07:44:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:26.546 07:44:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:26.546 07:44:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:26.546 07:44:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.546 07:44:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.546 07:44:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:26.546 07:44:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:26.546 07:44:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:26.546 07:44:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:26.546 07:44:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:26.546 07:44:30 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:26.546 07:44:30 -- host/perf.sh@17 -- # nvmftestinit 00:25:26.546 07:44:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:26.546 07:44:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.546 07:44:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:26.546 07:44:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:26.546 07:44:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:26.546 07:44:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.546 07:44:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:25:26.546 07:44:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.546 07:44:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:26.546 07:44:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:26.546 07:44:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:26.546 07:44:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.822 07:44:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:31.822 07:44:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:31.822 07:44:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:31.822 07:44:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:31.822 07:44:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:31.822 07:44:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:31.822 07:44:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:31.822 07:44:35 -- nvmf/common.sh@294 -- # net_devs=() 00:25:31.822 07:44:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:31.822 07:44:35 -- nvmf/common.sh@295 -- # e810=() 00:25:31.822 07:44:35 -- nvmf/common.sh@295 -- # local -ga e810 00:25:31.822 07:44:35 -- nvmf/common.sh@296 -- # x722=() 00:25:31.822 07:44:35 -- nvmf/common.sh@296 -- # local -ga x722 00:25:31.822 07:44:35 -- nvmf/common.sh@297 -- # mlx=() 00:25:31.822 07:44:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:31.822 07:44:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.822 07:44:35 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.822 07:44:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:31.822 07:44:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.822 07:44:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:31.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:31.822 07:44:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.822 07:44:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:31.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:31.822 07:44:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:31.822 
07:44:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.822 07:44:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.822 07:44:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.822 07:44:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.822 07:44:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:31.822 Found net devices under 0000:af:00.0: cvl_0_0 00:25:31.822 07:44:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.822 07:44:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.822 07:44:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.822 07:44:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.822 07:44:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:31.822 Found net devices under 0000:af:00.1: cvl_0_1 00:25:31.822 07:44:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:31.822 07:44:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:31.822 07:44:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:31.822 07:44:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.822 07:44:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.822 07:44:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:31.822 07:44:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.822 07:44:35 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.822 07:44:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:31.822 07:44:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.822 07:44:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.822 07:44:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:31.822 07:44:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:31.822 07:44:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.822 07:44:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.822 07:44:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.822 07:44:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.822 07:44:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:31.822 07:44:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.822 07:44:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.822 07:44:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.822 07:44:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:31.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:25:31.822 00:25:31.822 --- 10.0.0.2 ping statistics --- 00:25:31.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.822 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:31.822 07:44:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:25:31.822 00:25:31.822 --- 10.0.0.1 ping statistics --- 00:25:31.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.822 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:25:31.822 07:44:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.822 07:44:35 -- nvmf/common.sh@410 -- # return 0 00:25:31.822 07:44:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:31.822 07:44:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.822 07:44:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:31.822 07:44:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.822 07:44:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:31.822 07:44:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:31.822 07:44:35 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:31.822 07:44:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:31.822 07:44:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.823 07:44:35 -- common/autotest_common.sh@10 -- # set +x 00:25:31.823 07:44:35 -- nvmf/common.sh@469 -- # nvmfpid=44030 00:25:31.823 07:44:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.823 07:44:35 -- nvmf/common.sh@470 -- # waitforlisten 44030 00:25:31.823 07:44:35 -- common/autotest_common.sh@819 -- # '[' -z 44030 ']' 00:25:31.823 07:44:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.823 07:44:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.823 07:44:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:31.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.823 07:44:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.823 07:44:35 -- common/autotest_common.sh@10 -- # set +x 00:25:31.823 [2024-10-07 07:44:35.557656] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:31.823 [2024-10-07 07:44:35.557697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.823 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.823 [2024-10-07 07:44:35.616219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.823 [2024-10-07 07:44:35.692199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:31.823 [2024-10-07 07:44:35.692322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.823 [2024-10-07 07:44:35.692329] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.823 [2024-10-07 07:44:35.692336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.823 [2024-10-07 07:44:35.692383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.823 [2024-10-07 07:44:35.692480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.823 [2024-10-07 07:44:35.692509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.823 [2024-10-07 07:44:35.692510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.761 07:44:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:32.761 07:44:36 -- common/autotest_common.sh@852 -- # return 0 00:25:32.761 07:44:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:32.761 07:44:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:32.761 07:44:36 -- common/autotest_common.sh@10 -- # set +x 00:25:32.761 07:44:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.761 07:44:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:32.761 07:44:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:36.051 07:44:39 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:36.051 07:44:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:36.051 07:44:39 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:36.051 07:44:39 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:36.051 07:44:39 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:36.051 07:44:39 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:36.051 07:44:39 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:36.051 07:44:39 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:36.051 07:44:39 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:25:36.051 [2024-10-07 07:44:39.986353] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.052 07:44:40 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.311 07:44:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:36.311 07:44:40 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.570 07:44:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:36.570 07:44:40 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:36.829 07:44:40 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.829 [2024-10-07 07:44:40.766126] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.829 07:44:40 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.087 07:44:40 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:37.087 07:44:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:37.087 07:44:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:37.087 07:44:40 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:38.462 Initializing NVMe Controllers 00:25:38.462 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:38.462 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:38.462 Initialization complete. Launching workers. 
00:25:38.462 ======================================================== 00:25:38.462 Latency(us) 00:25:38.462 Device Information : IOPS MiB/s Average min max 00:25:38.462 PCIE (0000:5e:00.0) NSID 1 from core 0: 101556.85 396.71 314.67 9.33 6227.03 00:25:38.462 ======================================================== 00:25:38.462 Total : 101556.85 396.71 314.67 9.33 6227.03 00:25:38.462 00:25:38.462 07:44:42 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.462 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.841 Initializing NVMe Controllers 00:25:39.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:39.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:39.841 Initialization complete. Launching workers. 
00:25:39.841 ======================================================== 00:25:39.841 Latency(us) 00:25:39.841 Device Information : IOPS MiB/s Average min max 00:25:39.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.00 0.36 11413.40 129.91 45637.51 00:25:39.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16467.52 7962.76 47895.78 00:25:39.841 ======================================================== 00:25:39.841 Total : 152.00 0.59 13441.70 129.91 47895.78 00:25:39.841 00:25:39.841 07:44:43 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.841 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.778 Initializing NVMe Controllers 00:25:40.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:40.778 Initialization complete. Launching workers. 
00:25:40.778 ======================================================== 00:25:40.778 Latency(us) 00:25:40.779 Device Information : IOPS MiB/s Average min max 00:25:40.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10914.00 42.63 2935.14 370.42 6414.44 00:25:40.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3947.00 15.42 8143.20 6810.20 15693.45 00:25:40.779 ======================================================== 00:25:40.779 Total : 14861.00 58.05 4318.38 370.42 15693.45 00:25:40.779 00:25:40.779 07:44:44 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:40.779 07:44:44 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:40.779 07:44:44 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:41.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.573 Initializing NVMe Controllers 00:25:43.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.573 Controller IO queue size 128, less than required. 00:25:43.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.573 Controller IO queue size 128, less than required. 00:25:43.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.574 Initialization complete. Launching workers. 
00:25:43.574 ======================================================== 00:25:43.574 Latency(us) 00:25:43.574 Device Information : IOPS MiB/s Average min max 00:25:43.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1057.28 264.32 124825.18 59566.18 190814.42 00:25:43.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 618.99 154.75 217212.94 68299.59 327555.35 00:25:43.574 ======================================================== 00:25:43.574 Total : 1676.27 419.07 158940.97 59566.18 327555.35 00:25:43.574 00:25:43.574 07:44:47 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:43.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.574 No valid NVMe controllers or AIO or URING devices found 00:25:43.574 Initializing NVMe Controllers 00:25:43.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.574 Controller IO queue size 128, less than required. 00:25:43.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.574 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:43.574 Controller IO queue size 128, less than required. 00:25:43.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.574 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:43.574 WARNING: Some requested NVMe devices were skipped 00:25:43.574 07:44:47 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:43.574 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.112 Initializing NVMe Controllers 00:25:46.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.112 Controller IO queue size 128, less than required. 00:25:46.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.112 Controller IO queue size 128, less than required. 00:25:46.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:46.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:46.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:46.112 Initialization complete. Launching workers. 
00:25:46.112 00:25:46.112 ==================== 00:25:46.112 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:46.112 TCP transport: 00:25:46.112 polls: 37388 00:25:46.112 idle_polls: 11757 00:25:46.112 sock_completions: 25631 00:25:46.112 nvme_completions: 4254 00:25:46.112 submitted_requests: 6560 00:25:46.112 queued_requests: 1 00:25:46.112 00:25:46.112 ==================== 00:25:46.112 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:46.112 TCP transport: 00:25:46.112 polls: 37645 00:25:46.112 idle_polls: 12654 00:25:46.112 sock_completions: 24991 00:25:46.112 nvme_completions: 4179 00:25:46.112 submitted_requests: 6413 00:25:46.112 queued_requests: 1 00:25:46.112 ======================================================== 00:25:46.112 Latency(us) 00:25:46.112 Device Information : IOPS MiB/s Average min max 00:25:46.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1125.14 281.28 118512.57 60893.66 181530.85 00:25:46.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1106.17 276.54 118283.00 46675.00 176072.52 00:25:46.112 ======================================================== 00:25:46.113 Total : 2231.30 557.83 118398.76 46675.00 181530.85 00:25:46.113 00:25:46.113 07:44:49 -- host/perf.sh@66 -- # sync 00:25:46.113 07:44:49 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.371 07:44:50 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:46.371 07:44:50 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:25:46.371 07:44:50 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:49.661 07:44:53 -- host/perf.sh@72 -- # ls_guid=8649b7f5-fce4-4e0d-ad48-5572fcc57a53 00:25:49.661 07:44:53 -- host/perf.sh@73 -- # get_lvs_free_mb 8649b7f5-fce4-4e0d-ad48-5572fcc57a53 
00:25:49.661 07:44:53 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8649b7f5-fce4-4e0d-ad48-5572fcc57a53 00:25:49.661 07:44:53 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:49.661 07:44:53 -- common/autotest_common.sh@1345 -- # local fc 00:25:49.661 07:44:53 -- common/autotest_common.sh@1346 -- # local cs 00:25:49.661 07:44:53 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.661 07:44:53 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:49.661 { 00:25:49.661 "uuid": "8649b7f5-fce4-4e0d-ad48-5572fcc57a53", 00:25:49.661 "name": "lvs_0", 00:25:49.661 "base_bdev": "Nvme0n1", 00:25:49.661 "total_data_clusters": 238234, 00:25:49.661 "free_clusters": 238234, 00:25:49.661 "block_size": 512, 00:25:49.661 "cluster_size": 4194304 00:25:49.661 } 00:25:49.661 ]' 00:25:49.661 07:44:53 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8649b7f5-fce4-4e0d-ad48-5572fcc57a53") .free_clusters' 00:25:49.661 07:44:53 -- common/autotest_common.sh@1348 -- # fc=238234 00:25:49.661 07:44:53 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8649b7f5-fce4-4e0d-ad48-5572fcc57a53") .cluster_size' 00:25:49.661 07:44:53 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:49.661 07:44:53 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:25:49.661 07:44:53 -- common/autotest_common.sh@1353 -- # echo 952936 00:25:49.661 952936 00:25:49.661 07:44:53 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:49.661 07:44:53 -- host/perf.sh@78 -- # free_mb=20480 00:25:49.661 07:44:53 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8649b7f5-fce4-4e0d-ad48-5572fcc57a53 lbd_0 20480 00:25:50.229 07:44:54 -- host/perf.sh@80 -- # lb_guid=1d5b8ca6-aba6-483d-8baa-3fce8f1e131e 00:25:50.229 07:44:54 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore 1d5b8ca6-aba6-483d-8baa-3fce8f1e131e lvs_n_0 00:25:50.797 07:44:54 -- host/perf.sh@83 -- # ls_nested_guid=69e4a42c-f067-42ea-941f-0e5164c5c547 00:25:50.797 07:44:54 -- host/perf.sh@84 -- # get_lvs_free_mb 69e4a42c-f067-42ea-941f-0e5164c5c547 00:25:50.797 07:44:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=69e4a42c-f067-42ea-941f-0e5164c5c547 00:25:50.797 07:44:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:50.797 07:44:54 -- common/autotest_common.sh@1345 -- # local fc 00:25:50.797 07:44:54 -- common/autotest_common.sh@1346 -- # local cs 00:25:50.797 07:44:54 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:51.056 07:44:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:51.056 { 00:25:51.056 "uuid": "8649b7f5-fce4-4e0d-ad48-5572fcc57a53", 00:25:51.056 "name": "lvs_0", 00:25:51.056 "base_bdev": "Nvme0n1", 00:25:51.056 "total_data_clusters": 238234, 00:25:51.056 "free_clusters": 233114, 00:25:51.056 "block_size": 512, 00:25:51.056 "cluster_size": 4194304 00:25:51.056 }, 00:25:51.056 { 00:25:51.056 "uuid": "69e4a42c-f067-42ea-941f-0e5164c5c547", 00:25:51.056 "name": "lvs_n_0", 00:25:51.056 "base_bdev": "1d5b8ca6-aba6-483d-8baa-3fce8f1e131e", 00:25:51.056 "total_data_clusters": 5114, 00:25:51.056 "free_clusters": 5114, 00:25:51.056 "block_size": 512, 00:25:51.056 "cluster_size": 4194304 00:25:51.056 } 00:25:51.056 ]' 00:25:51.056 07:44:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="69e4a42c-f067-42ea-941f-0e5164c5c547") .free_clusters' 00:25:51.056 07:44:54 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:51.056 07:44:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="69e4a42c-f067-42ea-941f-0e5164c5c547") .cluster_size' 00:25:51.056 07:44:55 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:51.056 07:44:55 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:25:51.056 07:44:55 
-- common/autotest_common.sh@1353 -- # echo 20456 00:25:51.056 20456 00:25:51.056 07:44:55 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:51.056 07:44:55 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69e4a42c-f067-42ea-941f-0e5164c5c547 lbd_nest_0 20456 00:25:51.315 07:44:55 -- host/perf.sh@88 -- # lb_nested_guid=b3df7ed3-ffcc-4e4f-ab88-43e886e94708 00:25:51.315 07:44:55 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.575 07:44:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:51.575 07:44:55 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b3df7ed3-ffcc-4e4f-ab88-43e886e94708 00:25:51.834 07:44:55 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.834 07:44:55 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:51.834 07:44:55 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:51.834 07:44:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:51.834 07:44:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:51.834 07:44:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:52.181 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.428 Initializing NVMe Controllers 00:26:04.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:04.428 Initialization complete. Launching workers. 
00:26:04.428 ======================================================== 00:26:04.428 Latency(us) 00:26:04.428 Device Information : IOPS MiB/s Average min max 00:26:04.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.50 0.02 21592.25 147.92 47611.07 00:26:04.428 ======================================================== 00:26:04.428 Total : 46.50 0.02 21592.25 147.92 47611.07 00:26:04.428 00:26:04.428 07:45:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:04.428 07:45:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.428 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.547 Initializing NVMe Controllers 00:26:12.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:12.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:12.547 Initialization complete. Launching workers. 
00:26:12.547 ======================================================== 00:26:12.547 Latency(us) 00:26:12.547 Device Information : IOPS MiB/s Average min max 00:26:12.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.18 10.27 12177.95 5074.65 47882.56 00:26:12.547 ======================================================== 00:26:12.547 Total : 82.18 10.27 12177.95 5074.65 47882.56 00:26:12.547 00:26:12.547 07:45:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:12.547 07:45:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.548 07:45:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.807 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.792 Initializing NVMe Controllers 00:26:22.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.793 Initialization complete. Launching workers. 
00:26:22.793 ======================================================== 00:26:22.793 Latency(us) 00:26:22.793 Device Information : IOPS MiB/s Average min max 00:26:22.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8903.50 4.35 3594.55 261.49 10158.73 00:26:22.793 ======================================================== 00:26:22.793 Total : 8903.50 4.35 3594.55 261.49 10158.73 00:26:22.793 00:26:22.793 07:45:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.793 07:45:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.793 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.005 Initializing NVMe Controllers 00:26:35.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:35.005 Initialization complete. Launching workers. 
00:26:35.005 ======================================================== 00:26:35.005 Latency(us) 00:26:35.005 Device Information : IOPS MiB/s Average min max 00:26:35.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2060.57 257.57 15529.80 1335.44 37235.99 00:26:35.005 ======================================================== 00:26:35.005 Total : 2060.57 257.57 15529.80 1335.44 37235.99 00:26:35.005 00:26:35.005 07:45:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:35.005 07:45:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:35.005 07:45:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.005 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.991 Initializing NVMe Controllers 00:26:44.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.991 Controller IO queue size 128, less than required. 00:26:44.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:44.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.991 Initialization complete. Launching workers. 
00:26:44.991 ======================================================== 00:26:44.991 Latency(us) 00:26:44.991 Device Information : IOPS MiB/s Average min max 00:26:44.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15893.16 7.76 8053.96 1140.47 47818.76 00:26:44.991 ======================================================== 00:26:44.991 Total : 15893.16 7.76 8053.96 1140.47 47818.76 00:26:44.991 00:26:44.991 07:45:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.991 07:45:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.991 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.974 Initializing NVMe Controllers 00:26:54.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.974 Controller IO queue size 128, less than required. 00:26:54.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:54.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:54.974 Initialization complete. Launching workers. 
00:26:54.974 ======================================================== 00:26:54.974 Latency(us) 00:26:54.974 Device Information : IOPS MiB/s Average min max 00:26:54.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1209.30 151.16 106183.82 16926.10 213767.87 00:26:54.974 ======================================================== 00:26:54.974 Total : 1209.30 151.16 106183.82 16926.10 213767.87 00:26:54.974 00:26:54.974 07:45:57 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.974 07:45:58 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3df7ed3-ffcc-4e4f-ab88-43e886e94708 00:26:54.974 07:45:58 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:54.974 07:45:58 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d5b8ca6-aba6-483d-8baa-3fce8f1e131e 00:26:55.234 07:45:59 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:55.493 07:45:59 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:55.493 07:45:59 -- host/perf.sh@114 -- # nvmftestfini 00:26:55.493 07:45:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:55.493 07:45:59 -- nvmf/common.sh@116 -- # sync 00:26:55.493 07:45:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:55.493 07:45:59 -- nvmf/common.sh@119 -- # set +e 00:26:55.493 07:45:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:55.493 07:45:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:55.493 rmmod nvme_tcp 00:26:55.493 rmmod nvme_fabrics 00:26:55.493 rmmod nvme_keyring 00:26:55.493 07:45:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:55.493 07:45:59 -- nvmf/common.sh@123 -- # set -e 00:26:55.493 07:45:59 -- 
nvmf/common.sh@124 -- # return 0 00:26:55.493 07:45:59 -- nvmf/common.sh@477 -- # '[' -n 44030 ']' 00:26:55.493 07:45:59 -- nvmf/common.sh@478 -- # killprocess 44030 00:26:55.493 07:45:59 -- common/autotest_common.sh@926 -- # '[' -z 44030 ']' 00:26:55.493 07:45:59 -- common/autotest_common.sh@930 -- # kill -0 44030 00:26:55.493 07:45:59 -- common/autotest_common.sh@931 -- # uname 00:26:55.493 07:45:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:55.493 07:45:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 44030 00:26:55.493 07:45:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:55.493 07:45:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:55.493 07:45:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 44030' 00:26:55.493 killing process with pid 44030 00:26:55.493 07:45:59 -- common/autotest_common.sh@945 -- # kill 44030 00:26:55.493 07:45:59 -- common/autotest_common.sh@950 -- # wait 44030 00:26:57.400 07:46:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:57.400 07:46:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:57.400 07:46:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:57.400 07:46:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.400 07:46:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:57.400 07:46:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.400 07:46:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.400 07:46:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.308 07:46:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:59.308 00:26:59.308 real 1m33.024s 00:26:59.308 user 5m34.842s 00:26:59.308 sys 0m15.132s 00:26:59.308 07:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.308 07:46:02 -- common/autotest_common.sh@10 -- # set +x 00:26:59.308 ************************************ 
00:26:59.308 END TEST nvmf_perf 00:26:59.308 ************************************ 00:26:59.308 07:46:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:59.308 07:46:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:59.308 07:46:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:59.308 07:46:03 -- common/autotest_common.sh@10 -- # set +x 00:26:59.308 ************************************ 00:26:59.308 START TEST nvmf_fio_host 00:26:59.308 ************************************ 00:26:59.308 07:46:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:59.308 * Looking for test storage... 00:26:59.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.308 07:46:03 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.308 07:46:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.308 07:46:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.308 07:46:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.308 07:46:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@5 -- # export PATH 00:26:59.308 07:46:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.308 07:46:03 -- nvmf/common.sh@7 -- # uname -s 00:26:59.308 07:46:03 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:59.308 07:46:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.308 07:46:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.308 07:46:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.308 07:46:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.308 07:46:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.308 07:46:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.308 07:46:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.308 07:46:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.308 07:46:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.308 07:46:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:59.308 07:46:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:59.308 07:46:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.308 07:46:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.308 07:46:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.308 07:46:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.308 07:46:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.308 07:46:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.308 07:46:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.308 07:46:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- paths/export.sh@5 -- # export PATH 00:26:59.308 07:46:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.308 07:46:03 -- nvmf/common.sh@46 -- # : 0 00:26:59.308 07:46:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:59.308 07:46:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:59.308 07:46:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:59.308 07:46:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.308 07:46:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.308 07:46:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:59.308 07:46:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:59.308 07:46:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:59.308 07:46:03 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.308 07:46:03 -- host/fio.sh@14 -- # nvmftestinit 00:26:59.308 07:46:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:59.308 07:46:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.308 07:46:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:59.309 07:46:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:59.309 07:46:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:59.309 07:46:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.309 07:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.309 07:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:26:59.309 07:46:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:59.309 07:46:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:59.309 07:46:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:59.309 07:46:03 -- common/autotest_common.sh@10 -- # set +x 00:27:04.589 07:46:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:04.589 07:46:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:04.589 07:46:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:04.589 07:46:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:04.589 07:46:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:04.589 07:46:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:04.589 07:46:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:04.589 07:46:08 -- nvmf/common.sh@294 -- # net_devs=() 00:27:04.589 07:46:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:04.589 07:46:08 -- nvmf/common.sh@295 -- # e810=() 00:27:04.589 07:46:08 -- nvmf/common.sh@295 -- # local -ga e810 00:27:04.589 07:46:08 -- nvmf/common.sh@296 -- # x722=() 00:27:04.589 07:46:08 -- nvmf/common.sh@296 -- # local -ga x722 00:27:04.589 07:46:08 -- nvmf/common.sh@297 -- # mlx=() 00:27:04.589 07:46:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:04.589 07:46:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:27:04.589 07:46:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.589 07:46:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:04.589 07:46:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:04.589 07:46:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:04.589 07:46:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:04.589 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:04.589 07:46:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:04.589 07:46:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:04.589 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:04.589 07:46:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:04.589 
07:46:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:04.589 07:46:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.589 07:46:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.589 07:46:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:04.589 Found net devices under 0000:af:00.0: cvl_0_0 00:27:04.589 07:46:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.589 07:46:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:04.589 07:46:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.589 07:46:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.589 07:46:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:04.589 Found net devices under 0000:af:00.1: cvl_0_1 00:27:04.589 07:46:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.589 07:46:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:04.589 07:46:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:04.589 07:46:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:04.589 07:46:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.589 07:46:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.589 07:46:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.589 07:46:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:04.589 07:46:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.589 07:46:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.589 07:46:08 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:04.589 07:46:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.589 07:46:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.589 07:46:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:04.589 07:46:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:04.589 07:46:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.589 07:46:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.848 07:46:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.848 07:46:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.848 07:46:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:04.848 07:46:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.848 07:46:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.848 07:46:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.848 07:46:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:04.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:27:04.848 00:27:04.848 --- 10.0.0.2 ping statistics --- 00:27:04.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.848 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:27:04.848 07:46:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
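The `nvmf_tcp_init` sequence traced above builds the two-port TCP test topology: one ice port is moved into a dedicated network namespace and acts as the target, so initiator/target traffic crosses the physical link rather than the loopback path, and both directions are then verified with `ping`. Collected into one place as a sketch (every command is taken from the trace above; it assumes the two ports have already been renamed `cvl_0_0`/`cvl_0_1` by the test harness, and must run as root on that hardware):

```shell
# Sketch of the netns-based topology from nvmf_tcp_init (commands as traced above).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```

Because the target lives in `cvl_0_0_ns_spdk`, every target-side command in the rest of the run is wrapped in `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` array set just above).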
00:27:04.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:27:04.848 00:27:04.848 --- 10.0.0.1 ping statistics --- 00:27:04.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.848 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:04.848 07:46:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.848 07:46:08 -- nvmf/common.sh@410 -- # return 0 00:27:04.848 07:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:04.848 07:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.848 07:46:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:04.848 07:46:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:04.848 07:46:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.848 07:46:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:04.848 07:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:04.848 07:46:08 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:04.848 07:46:08 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:04.848 07:46:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:04.848 07:46:08 -- common/autotest_common.sh@10 -- # set +x 00:27:04.848 07:46:08 -- host/fio.sh@24 -- # nvmfpid=61687 00:27:04.848 07:46:08 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:04.848 07:46:08 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:04.848 07:46:08 -- host/fio.sh@28 -- # waitforlisten 61687 00:27:04.848 07:46:08 -- common/autotest_common.sh@819 -- # '[' -z 61687 ']' 00:27:04.848 07:46:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.848 07:46:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:04.848 07:46:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:04.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.848 07:46:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:04.848 07:46:08 -- common/autotest_common.sh@10 -- # set +x 00:27:04.848 [2024-10-07 07:46:08.794281] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:04.848 [2024-10-07 07:46:08.794330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.108 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.108 [2024-10-07 07:46:08.854064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.108 [2024-10-07 07:46:08.922905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:05.108 [2024-10-07 07:46:08.923020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.108 [2024-10-07 07:46:08.923028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.108 [2024-10-07 07:46:08.923034] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
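Here `host/fio.sh` launches `nvmf_tgt` inside the namespace and blocks in `waitforlisten` until the app's UNIX-domain RPC socket (`/var/tmp/spdk.sock`) is serviceable. A minimal sketch of that wait loop follows; it is a simplification under stated assumptions — the real helper in `common/autotest_common.sh` probes the socket by issuing an RPC (via `rpc.py`) rather than merely testing that the socket file exists:

```shell
# Simplified sketch of waitforlisten: poll until the target process is alive
# and its RPC socket exists. (The real helper also exercises the socket with
# an actual RPC call; this existence check is an illustrative stand-in.)
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # socket is up; RPCs can be sent
        sleep 0.1
    done
    return 1                                     # timed out (~10 s)
}

# Illustration with a process that has already exited: the wait fails fast.
sleep 0.1 & tgt=$!
wait "$tgt"
waitforlisten "$tgt" /tmp/no-such.sock && echo "listening" || echo "not listening"
```

The `trap … SIGINT SIGTERM EXIT` registered alongside it ensures `nvmftestfini` tears the target and namespace down even if a later step fails.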
00:27:05.108 [2024-10-07 07:46:08.923089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.108 [2024-10-07 07:46:08.923149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.108 [2024-10-07 07:46:08.923238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.108 [2024-10-07 07:46:08.923239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.676 07:46:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.676 07:46:09 -- common/autotest_common.sh@852 -- # return 0 00:27:05.677 07:46:09 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:05.937 [2024-10-07 07:46:09.770810] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.937 07:46:09 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:05.937 07:46:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:05.937 07:46:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.937 07:46:09 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:06.196 Malloc1 00:27:06.196 07:46:10 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.455 07:46:10 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:06.455 07:46:10 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.715 [2024-10-07 07:46:10.566031] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.715 07:46:10 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:06.974 07:46:10 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:06.974 07:46:10 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.974 07:46:10 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.974 07:46:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:06.974 07:46:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.974 07:46:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:06.974 07:46:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.974 07:46:10 -- common/autotest_common.sh@1320 -- # shift 00:27:06.974 07:46:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:06.974 07:46:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:06.974 07:46:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:06.974 07:46:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:06.974 07:46:10 -- 
common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:06.974 07:46:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:06.974 07:46:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:06.974 07:46:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:06.974 07:46:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:07.233 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:07.233 fio-3.35 00:27:07.233 Starting 1 thread 00:27:07.233 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.771 00:27:09.771 test: (groupid=0, jobs=1): err= 0: pid=62281: Mon Oct 7 07:46:13 2024 00:27:09.771 read: IOPS=12.8k, BW=50.0MiB/s (52.5MB/s)(100MiB/2004msec) 00:27:09.771 slat (nsec): min=1520, max=237267, avg=1704.20, stdev=2156.20 00:27:09.771 clat (usec): min=3522, max=9528, avg=5531.56, stdev=395.76 00:27:09.771 lat (usec): min=3551, max=9529, avg=5533.26, stdev=395.67 00:27:09.771 clat percentiles (usec): 00:27:09.771 | 1.00th=[ 4621], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 00:27:09.771 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5604], 00:27:09.771 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6128], 00:27:09.771 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8455], 00:27:09.771 | 99.99th=[ 9372] 00:27:09.771 bw ( KiB/s): min=50056, max=51776, per=99.93%, avg=51198.00, stdev=777.34, samples=4 00:27:09.771 iops : min=12514, max=12944, avg=12799.50, stdev=194.34, samples=4 00:27:09.771 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(100MiB/2004msec); 0 zone 
resets 00:27:09.771 slat (nsec): min=1560, max=236300, avg=1791.67, stdev=1666.68 00:27:09.771 clat (usec): min=2445, max=8342, avg=4419.80, stdev=330.88 00:27:09.771 lat (usec): min=2460, max=8344, avg=4421.59, stdev=330.83 00:27:09.771 clat percentiles (usec): 00:27:09.771 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4146], 00:27:09.771 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:27:09.771 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:27:09.771 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6521], 99.95th=[ 7308], 00:27:09.771 | 99.99th=[ 7767] 00:27:09.771 bw ( KiB/s): min=50544, max=51648, per=99.98%, avg=51086.00, stdev=501.15, samples=4 00:27:09.771 iops : min=12636, max=12912, avg=12771.50, stdev=125.29, samples=4 00:27:09.771 lat (msec) : 4=4.53%, 10=95.47% 00:27:09.771 cpu : usr=66.85%, sys=28.41%, ctx=82, majf=0, minf=5 00:27:09.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:09.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.772 issued rwts: total=25669,25600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.772 00:27:09.772 Run status group 0 (all jobs): 00:27:09.772 READ: bw=50.0MiB/s (52.5MB/s), 50.0MiB/s-50.0MiB/s (52.5MB/s-52.5MB/s), io=100MiB (105MB), run=2004-2004msec 00:27:09.772 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=100MiB (105MB), run=2004-2004msec 00:27:09.772 07:46:13 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:09.772 07:46:13 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:09.772 07:46:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:09.772 07:46:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.772 07:46:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:09.772 07:46:13 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:09.772 07:46:13 -- common/autotest_common.sh@1320 -- # shift 00:27:09.772 07:46:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:09.772 07:46:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:09.772 07:46:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:09.772 07:46:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:09.772 07:46:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:09.772 07:46:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:09.772 07:46:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:09.772 07:46:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:10.030 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:10.030 fio-3.35 00:27:10.030 Starting 1 thread 00:27:10.030 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.566 00:27:12.566 test: (groupid=0, jobs=1): err= 0: pid=62845: Mon Oct 7 07:46:16 2024 00:27:12.566 read: IOPS=11.1k, BW=174MiB/s (182MB/s)(348MiB/2004msec) 00:27:12.566 slat (nsec): min=2516, max=81052, avg=2821.22, stdev=1334.86 00:27:12.566 clat (usec): min=2389, max=14483, avg=6943.75, stdev=1783.02 00:27:12.566 lat (usec): min=2392, max=14486, avg=6946.57, stdev=1783.22 00:27:12.566 clat percentiles (usec): 00:27:12.566 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:27:12.566 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7308], 00:27:12.566 | 70.00th=[ 7898], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[ 9896], 00:27:12.566 | 99.00th=[11994], 99.50th=[12518], 99.90th=[13304], 99.95th=[13698], 00:27:12.566 | 99.99th=[14484] 00:27:12.566 bw ( KiB/s): min=84960, max=97280, per=50.16%, avg=89248.00, stdev=5524.86, samples=4 00:27:12.566 iops : min= 5310, max= 6080, avg=5578.00, stdev=345.30, samples=4 00:27:12.566 write: IOPS=6485, BW=101MiB/s (106MB/s)(183MiB/1802msec); 0 zone resets 00:27:12.566 slat (usec): min=29, max=373, avg=31.61, stdev= 7.12 00:27:12.566 clat (usec): min=3169, max=14768, avg=8043.12, stdev=1336.62 00:27:12.566 lat (usec): min=3198, max=14797, avg=8074.73, stdev=1338.42 00:27:12.566 clat percentiles (usec): 00:27:12.566 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 6980], 00:27:12.566 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8225], 00:27:12.566 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10552], 00:27:12.566 | 99.00th=[11731], 99.50th=[12256], 
99.90th=[13566], 99.95th=[13829], 00:27:12.566 | 99.99th=[13829] 00:27:12.566 bw ( KiB/s): min=87488, max=101376, per=89.65%, avg=93016.00, stdev=5930.02, samples=4 00:27:12.566 iops : min= 5468, max= 6336, avg=5813.50, stdev=370.63, samples=4 00:27:12.566 lat (msec) : 4=1.75%, 10=92.41%, 20=5.84% 00:27:12.566 cpu : usr=86.47%, sys=12.03%, ctx=13, majf=0, minf=1 00:27:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:12.566 issued rwts: total=22284,11686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:12.566 00:27:12.566 Run status group 0 (all jobs): 00:27:12.566 READ: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=348MiB (365MB), run=2004-2004msec 00:27:12.566 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=183MiB (191MB), run=1802-1802msec 00:27:12.566 07:46:16 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.566 07:46:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:12.566 07:46:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:12.566 07:46:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:12.566 07:46:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:12.566 07:46:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:12.566 07:46:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:12.566 07:46:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:12.566 07:46:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:12.566 07:46:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:12.566 
07:46:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:27:12.566 07:46:16 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:27:15.857 Nvme0n1 00:27:15.857 07:46:19 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:18.394 07:46:22 -- host/fio.sh@53 -- # ls_guid=4d08c42d-9f08-43c8-986c-501eae59f940 00:27:18.395 07:46:22 -- host/fio.sh@54 -- # get_lvs_free_mb 4d08c42d-9f08-43c8-986c-501eae59f940 00:27:18.395 07:46:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4d08c42d-9f08-43c8-986c-501eae59f940 00:27:18.395 07:46:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:18.395 07:46:22 -- common/autotest_common.sh@1345 -- # local fc 00:27:18.395 07:46:22 -- common/autotest_common.sh@1346 -- # local cs 00:27:18.395 07:46:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:18.654 07:46:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:18.654 { 00:27:18.654 "uuid": "4d08c42d-9f08-43c8-986c-501eae59f940", 00:27:18.654 "name": "lvs_0", 00:27:18.654 "base_bdev": "Nvme0n1", 00:27:18.654 "total_data_clusters": 930, 00:27:18.654 "free_clusters": 930, 00:27:18.654 "block_size": 512, 00:27:18.654 "cluster_size": 1073741824 00:27:18.654 } 00:27:18.654 ]' 00:27:18.654 07:46:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4d08c42d-9f08-43c8-986c-501eae59f940") .free_clusters' 00:27:18.654 07:46:22 -- common/autotest_common.sh@1348 -- # fc=930 00:27:18.654 07:46:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4d08c42d-9f08-43c8-986c-501eae59f940") .cluster_size' 00:27:18.913 07:46:22 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:18.913 07:46:22 -- common/autotest_common.sh@1352 -- # 
free_mb=952320 00:27:18.913 07:46:22 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:18.913 952320 00:27:18.913 07:46:22 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:19.173 fa7825f0-c349-403e-9d19-1314a66d1cf9 00:27:19.173 07:46:22 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:19.432 07:46:23 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:19.432 07:46:23 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:19.691 07:46:23 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:19.691 07:46:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:19.691 07:46:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:19.691 07:46:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.691 07:46:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:19.691 07:46:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.691 07:46:23 -- common/autotest_common.sh@1320 -- # shift 00:27:19.691 07:46:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:19.691 07:46:23 -- common/autotest_common.sh@1323 -- # for 
sanitizer in "${sanitizers[@]}" 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:19.691 07:46:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:19.691 07:46:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:19.691 07:46:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:19.691 07:46:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:19.691 07:46:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:19.691 07:46:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:19.951 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:19.951 fio-3.35 00:27:19.951 Starting 1 thread 00:27:19.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.486 00:27:22.486 test: (groupid=0, jobs=1): err= 0: pid=64583: Mon Oct 7 07:46:26 2024 00:27:22.486 read: IOPS=8664, BW=33.8MiB/s (35.5MB/s)(67.9MiB/2005msec) 00:27:22.486 slat (nsec): min=1513, max=102953, avg=1663.33, stdev=1036.84 00:27:22.486 clat (usec): min=610, max=170539, avg=8168.57, stdev=9985.72 00:27:22.486 lat (usec): min=612, max=170553, avg=8170.23, 
stdev=9985.88 00:27:22.486 clat percentiles (msec): 00:27:22.486 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:27:22.486 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:27:22.486 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:27:22.486 | 99.00th=[ 9], 99.50th=[ 11], 99.90th=[ 171], 99.95th=[ 171], 00:27:22.486 | 99.99th=[ 171] 00:27:22.486 bw ( KiB/s): min=24544, max=38112, per=99.79%, avg=34586.00, stdev=6699.22, samples=4 00:27:22.486 iops : min= 6136, max= 9528, avg=8646.50, stdev=1674.81, samples=4 00:27:22.486 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2005msec); 0 zone resets 00:27:22.486 slat (nsec): min=1572, max=78798, avg=1734.33, stdev=674.18 00:27:22.486 clat (usec): min=195, max=168349, avg=6534.80, stdev=9281.58 00:27:22.486 lat (usec): min=197, max=168354, avg=6536.53, stdev=9281.75 00:27:22.486 clat percentiles (msec): 00:27:22.486 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:22.486 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 7], 00:27:22.486 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:27:22.486 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 169], 99.95th=[ 169], 00:27:22.486 | 99.99th=[ 169] 00:27:22.486 bw ( KiB/s): min=25768, max=37760, per=99.98%, avg=34602.00, stdev=5891.50, samples=4 00:27:22.486 iops : min= 6442, max= 9440, avg=8650.50, stdev=1472.87, samples=4 00:27:22.486 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:22.486 lat (msec) : 2=0.05%, 4=0.27%, 10=99.19%, 20=0.09%, 250=0.37% 00:27:22.486 cpu : usr=64.42%, sys=32.09%, ctx=109, majf=0, minf=5 00:27:22.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:22.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:22.486 issued rwts: total=17373,17348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.486 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:27:22.486 00:27:22.486 Run status group 0 (all jobs): 00:27:22.486 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2005-2005msec 00:27:22.486 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.1MB), run=2005-2005msec 00:27:22.486 07:46:26 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:22.486 07:46:26 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:23.864 07:46:27 -- host/fio.sh@64 -- # ls_nested_guid=16ebbf78-9192-4489-88a4-bf9e36dad62a 00:27:23.864 07:46:27 -- host/fio.sh@65 -- # get_lvs_free_mb 16ebbf78-9192-4489-88a4-bf9e36dad62a 00:27:23.864 07:46:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=16ebbf78-9192-4489-88a4-bf9e36dad62a 00:27:23.864 07:46:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:23.864 07:46:27 -- common/autotest_common.sh@1345 -- # local fc 00:27:23.864 07:46:27 -- common/autotest_common.sh@1346 -- # local cs 00:27:23.864 07:46:27 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:23.864 07:46:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:23.864 { 00:27:23.865 "uuid": "4d08c42d-9f08-43c8-986c-501eae59f940", 00:27:23.865 "name": "lvs_0", 00:27:23.865 "base_bdev": "Nvme0n1", 00:27:23.865 "total_data_clusters": 930, 00:27:23.865 "free_clusters": 0, 00:27:23.865 "block_size": 512, 00:27:23.865 "cluster_size": 1073741824 00:27:23.865 }, 00:27:23.865 { 00:27:23.865 "uuid": "16ebbf78-9192-4489-88a4-bf9e36dad62a", 00:27:23.865 "name": "lvs_n_0", 00:27:23.865 "base_bdev": "fa7825f0-c349-403e-9d19-1314a66d1cf9", 00:27:23.865 "total_data_clusters": 237847, 00:27:23.865 "free_clusters": 237847, 00:27:23.865 "block_size": 512, 
00:27:23.865 "cluster_size": 4194304 00:27:23.865 } 00:27:23.865 ]' 00:27:23.865 07:46:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="16ebbf78-9192-4489-88a4-bf9e36dad62a") .free_clusters' 00:27:23.865 07:46:27 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:23.865 07:46:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="16ebbf78-9192-4489-88a4-bf9e36dad62a") .cluster_size' 00:27:23.865 07:46:27 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:23.865 07:46:27 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:23.865 07:46:27 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:23.865 951388 00:27:23.865 07:46:27 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:24.434 e11979f3-4861-4349-bbe6-7f6c13213289 00:27:24.434 07:46:28 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:24.692 07:46:28 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:24.692 07:46:28 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:24.951 07:46:28 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.951 07:46:28 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.951 07:46:28 -- common/autotest_common.sh@1316 -- 
# local fio_dir=/usr/src/fio 00:27:24.951 07:46:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.951 07:46:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:24.951 07:46:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.951 07:46:28 -- common/autotest_common.sh@1320 -- # shift 00:27:24.951 07:46:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:24.951 07:46:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.951 07:46:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.951 07:46:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.951 07:46:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.951 07:46:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.951 07:46:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:24.951 07:46:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:25.210 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:25.210 fio-3.35 00:27:25.210 Starting 1 thread 00:27:25.210 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.743 00:27:27.743 test: (groupid=0, jobs=1): err= 0: pid=65610: Mon Oct 7 07:46:31 2024 00:27:27.744 read: IOPS=8273, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2006msec) 00:27:27.744 slat (nsec): min=1524, max=104736, avg=1646.42, stdev=1070.76 00:27:27.744 clat (usec): min=3053, max=14318, avg=8581.76, stdev=700.51 00:27:27.744 lat (usec): min=3057, max=14319, avg=8583.40, stdev=700.46 00:27:27.744 clat percentiles (usec): 00:27:27.744 | 1.00th=[ 6980], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:27:27.744 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:27:27.744 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:27:27.744 | 99.00th=[10290], 99.50th=[10421], 99.90th=[11600], 99.95th=[11731], 00:27:27.744 | 99.99th=[14222] 00:27:27.744 bw ( KiB/s): min=32168, max=33784, per=99.84%, avg=33040.00, stdev=665.01, samples=4 00:27:27.744 iops : min= 8042, max= 8446, avg=8260.00, stdev=166.25, samples=4 00:27:27.744 write: IOPS=8273, BW=32.3MiB/s (33.9MB/s)(64.8MiB/2006msec); 0 zone resets 00:27:27.744 slat (nsec): min=1567, max=79857, avg=1739.60, stdev=715.59 00:27:27.744 clat (usec): min=1442, max=12889, avg=6802.25, stdev=619.12 00:27:27.744 lat (usec): min=1447, max=12891, avg=6803.99, stdev=619.10 00:27:27.744 clat percentiles (usec): 00:27:27.744 | 1.00th=[ 5342], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6325], 00:27:27.744 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:27:27.744 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7767], 00:27:27.744 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[ 9765], 99.95th=[10683], 00:27:27.744 | 99.99th=[12911] 00:27:27.744 bw ( KiB/s): min=32944, max=33368, per=99.99%, avg=33092.00, stdev=192.94, samples=4 00:27:27.744 iops : min= 8236, max= 8342, 
avg=8273.00, stdev=48.24, samples=4 00:27:27.744 lat (msec) : 2=0.01%, 4=0.11%, 10=98.91%, 20=0.98% 00:27:27.744 cpu : usr=67.23%, sys=29.38%, ctx=120, majf=0, minf=5 00:27:27.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:27.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:27.744 issued rwts: total=16596,16597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:27.744 00:27:27.744 Run status group 0 (all jobs): 00:27:27.744 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (68.0MB), run=2006-2006msec 00:27:27.744 WRITE: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.8MiB (68.0MB), run=2006-2006msec 00:27:27.744 07:46:31 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:27.744 07:46:31 -- host/fio.sh@74 -- # sync 00:27:27.744 07:46:31 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:32.108 07:46:35 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:32.108 07:46:35 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:34.642 07:46:38 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:34.642 07:46:38 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:36.547 07:46:40 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:36.547 07:46:40 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:36.547 07:46:40 -- 
host/fio.sh@86 -- # nvmftestfini 00:27:36.547 07:46:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:36.547 07:46:40 -- nvmf/common.sh@116 -- # sync 00:27:36.547 07:46:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:36.547 07:46:40 -- nvmf/common.sh@119 -- # set +e 00:27:36.548 07:46:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:36.548 07:46:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:36.548 rmmod nvme_tcp 00:27:36.548 rmmod nvme_fabrics 00:27:36.548 rmmod nvme_keyring 00:27:36.548 07:46:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:36.548 07:46:40 -- nvmf/common.sh@123 -- # set -e 00:27:36.548 07:46:40 -- nvmf/common.sh@124 -- # return 0 00:27:36.548 07:46:40 -- nvmf/common.sh@477 -- # '[' -n 61687 ']' 00:27:36.548 07:46:40 -- nvmf/common.sh@478 -- # killprocess 61687 00:27:36.548 07:46:40 -- common/autotest_common.sh@926 -- # '[' -z 61687 ']' 00:27:36.548 07:46:40 -- common/autotest_common.sh@930 -- # kill -0 61687 00:27:36.548 07:46:40 -- common/autotest_common.sh@931 -- # uname 00:27:36.548 07:46:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:36.548 07:46:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61687 00:27:36.548 07:46:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:36.548 07:46:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:36.548 07:46:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61687' 00:27:36.548 killing process with pid 61687 00:27:36.548 07:46:40 -- common/autotest_common.sh@945 -- # kill 61687 00:27:36.548 07:46:40 -- common/autotest_common.sh@950 -- # wait 61687 00:27:36.807 07:46:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:36.807 07:46:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:36.807 07:46:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:36.807 07:46:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.807 07:46:40 -- 
nvmf/common.sh@277 -- # remove_spdk_ns 00:27:36.807 07:46:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.807 07:46:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.807 07:46:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.346 07:46:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:39.346 00:27:39.346 real 0m39.727s 00:27:39.346 user 2m40.544s 00:27:39.346 sys 0m8.823s 00:27:39.346 07:46:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.346 07:46:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.346 ************************************ 00:27:39.346 END TEST nvmf_fio_host 00:27:39.346 ************************************ 00:27:39.346 07:46:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:39.346 07:46:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:39.346 07:46:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.346 07:46:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.346 ************************************ 00:27:39.346 START TEST nvmf_failover 00:27:39.346 ************************************ 00:27:39.346 07:46:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:39.346 * Looking for test storage... 
00:27:39.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:39.346 07:46:42 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.346 07:46:42 -- nvmf/common.sh@7 -- # uname -s 00:27:39.346 07:46:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.346 07:46:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.346 07:46:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.346 07:46:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.346 07:46:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.347 07:46:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.347 07:46:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.347 07:46:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.347 07:46:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.347 07:46:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.347 07:46:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:39.347 07:46:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:39.347 07:46:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.347 07:46:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.347 07:46:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.347 07:46:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.347 07:46:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.347 07:46:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.347 07:46:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.347 07:46:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.347 07:46:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.347 07:46:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.347 07:46:42 -- paths/export.sh@5 -- # export PATH 00:27:39.347 07:46:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.347 07:46:42 -- nvmf/common.sh@46 -- # : 0 00:27:39.347 07:46:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:39.347 07:46:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:39.347 07:46:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:39.347 07:46:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.347 07:46:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.347 07:46:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:39.347 07:46:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:39.347 07:46:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:39.347 07:46:42 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.347 07:46:42 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.347 07:46:42 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:39.347 07:46:42 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:39.347 07:46:42 -- host/failover.sh@18 -- # nvmftestinit 00:27:39.347 07:46:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:39.347 07:46:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.347 07:46:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:39.347 07:46:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:39.347 07:46:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:39.347 07:46:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:39.347 07:46:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.347 07:46:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.347 07:46:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:39.347 07:46:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:39.347 07:46:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:39.347 07:46:42 -- common/autotest_common.sh@10 -- # set +x 00:27:44.653 07:46:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:44.653 07:46:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:44.653 07:46:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:44.653 07:46:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:44.653 07:46:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:44.653 07:46:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:44.653 07:46:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:44.653 07:46:48 -- nvmf/common.sh@294 -- # net_devs=() 00:27:44.653 07:46:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:44.653 07:46:48 -- nvmf/common.sh@295 -- # e810=() 00:27:44.653 07:46:48 -- nvmf/common.sh@295 -- # local -ga e810 00:27:44.653 07:46:48 -- nvmf/common.sh@296 -- # x722=() 00:27:44.653 07:46:48 -- nvmf/common.sh@296 -- # local -ga x722 00:27:44.653 07:46:48 -- nvmf/common.sh@297 -- # mlx=() 00:27:44.653 07:46:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:44.653 07:46:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:27:44.653 07:46:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.653 07:46:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:44.653 07:46:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:44.653 07:46:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:44.653 07:46:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:44.653 07:46:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:44.653 07:46:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:44.653 07:46:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.654 07:46:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:44.654 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:44.654 07:46:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:44.654 07:46:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:44.654 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:44.654 07:46:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.654 07:46:48 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:44.654 07:46:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.654 07:46:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.654 07:46:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.654 07:46:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.654 07:46:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:44.654 Found net devices under 0000:af:00.0: cvl_0_0 00:27:44.654 07:46:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.654 07:46:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:44.654 07:46:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.654 07:46:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:44.654 07:46:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.654 07:46:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:44.654 Found net devices under 0000:af:00.1: cvl_0_1 00:27:44.654 07:46:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.654 07:46:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:44.654 07:46:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:44.654 07:46:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:44.654 07:46:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.654 07:46:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.654 07:46:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.654 07:46:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:27:44.654 07:46:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.654 07:46:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.654 07:46:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:44.654 07:46:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.654 07:46:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.654 07:46:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:44.654 07:46:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:44.654 07:46:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.654 07:46:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.654 07:46:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.654 07:46:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.654 07:46:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:44.654 07:46:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.654 07:46:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.654 07:46:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.654 07:46:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:44.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:27:44.654 00:27:44.654 --- 10.0.0.2 ping statistics --- 00:27:44.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.654 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:44.654 07:46:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:27:44.654 00:27:44.654 --- 10.0.0.1 ping statistics --- 00:27:44.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.654 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:44.654 07:46:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.654 07:46:48 -- nvmf/common.sh@410 -- # return 0 00:27:44.654 07:46:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:44.654 07:46:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.654 07:46:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:44.654 07:46:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.654 07:46:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:44.654 07:46:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:44.654 07:46:48 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:44.654 07:46:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:44.654 07:46:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:44.654 07:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.654 07:46:48 -- nvmf/common.sh@469 -- # nvmfpid=70689 00:27:44.654 07:46:48 -- nvmf/common.sh@470 -- # waitforlisten 70689 00:27:44.654 07:46:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:44.654 07:46:48 -- common/autotest_common.sh@819 -- # '[' -z 70689 ']' 00:27:44.654 07:46:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.654 07:46:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:44.654 07:46:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:44.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.654 07:46:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:44.654 07:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.654 [2024-10-07 07:46:48.471926] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:44.654 [2024-10-07 07:46:48.471970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.654 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.654 [2024-10-07 07:46:48.530809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.654 [2024-10-07 07:46:48.607119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:44.654 [2024-10-07 07:46:48.607226] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.654 [2024-10-07 07:46:48.607235] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.654 [2024-10-07 07:46:48.607242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:44.654 [2024-10-07 07:46:48.607338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.654 [2024-10-07 07:46:48.607425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.654 [2024-10-07 07:46:48.607426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.592 07:46:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:45.592 07:46:49 -- common/autotest_common.sh@852 -- # return 0 00:27:45.592 07:46:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:45.592 07:46:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:45.592 07:46:49 -- common/autotest_common.sh@10 -- # set +x 00:27:45.592 07:46:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.592 07:46:49 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:45.592 [2024-10-07 07:46:49.480020] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.592 07:46:49 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:45.852 Malloc0 00:27:45.852 07:46:49 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.111 07:46:49 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.369 07:46:50 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.369 [2024-10-07 07:46:50.285263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.369 07:46:50 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:46.628 [2024-10-07 07:46:50.477864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:46.628 07:46:50 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:46.887 [2024-10-07 07:46:50.658461] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:46.887 07:46:50 -- host/failover.sh@31 -- # bdevperf_pid=71158 00:27:46.887 07:46:50 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:46.887 07:46:50 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.887 07:46:50 -- host/failover.sh@34 -- # waitforlisten 71158 /var/tmp/bdevperf.sock 00:27:46.887 07:46:50 -- common/autotest_common.sh@819 -- # '[' -z 71158 ']' 00:27:46.887 07:46:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.887 07:46:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:46.887 07:46:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:46.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:46.887 07:46:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:46.887 07:46:50 -- common/autotest_common.sh@10 -- # set +x 00:27:47.825 07:46:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:47.825 07:46:51 -- common/autotest_common.sh@852 -- # return 0 00:27:47.825 07:46:51 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.084 NVMe0n1 00:27:48.084 07:46:51 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:48.343 00:27:48.343 07:46:52 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:48.343 07:46:52 -- host/failover.sh@39 -- # run_test_pid=71393 00:27:48.343 07:46:52 -- host/failover.sh@41 -- # sleep 1 00:27:49.723 07:46:53 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.723 [2024-10-07 07:46:53.457833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.723 [2024-10-07 07:46:53.457889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.723 [2024-10-07 07:46:53.457897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.723 [2024-10-07 07:46:53.457903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.723 [2024-10-07 
07:46:53.457911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.723 [... identical tcp.c:1574 "recv state of tqpair=0x18ca7c0 is same with the state(5) to be set" messages repeated for timestamps 07:46:53.457917 through 07:46:53.458135 ...] 00:27:49.724 [2024-10-07 07:46:53.458141]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458214] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 [2024-10-07 07:46:53.458264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca7c0 is same with the state(5) to be set 00:27:49.724 07:46:53 -- host/failover.sh@45 -- # sleep 3 00:27:53.013 07:46:56 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.013 00:27:53.013 07:46:56 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:53.274 [2024-10-07 07:46:57.049385] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049497] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049575] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049654] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049736] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049809] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 [2024-10-07 07:46:57.049846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb340 is same with the state(5) to be set 00:27:53.274 07:46:57 -- host/failover.sh@50 -- # sleep 3 00:27:56.563 07:47:00 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.563 [2024-10-07 07:47:00.260964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.563 07:47:00 -- host/failover.sh@55 -- # sleep 1 00:27:57.500 07:47:01 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:57.760 [2024-10-07 07:47:01.476591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476641] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.760 [2024-10-07 07:47:01.476673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476714] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476784] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476867] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476939] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 [2024-10-07 07:47:01.476945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725a80 is same with the state(5) to be set 00:27:57.761 07:47:01 -- host/failover.sh@59 -- # wait 71393 00:28:04.335 0 00:28:04.335 07:47:07 -- host/failover.sh@61 -- # killprocess 71158 00:28:04.335 07:47:07 -- common/autotest_common.sh@926 -- # '[' -z 71158 ']' 00:28:04.335 07:47:07 -- common/autotest_common.sh@930 -- # kill -0 71158 00:28:04.335 07:47:07 -- common/autotest_common.sh@931 -- # uname 00:28:04.335 07:47:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:04.335 07:47:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71158 00:28:04.335 07:47:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:04.335 07:47:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:04.335 07:47:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71158' 00:28:04.335 killing process with pid 71158 00:28:04.335 07:47:07 -- common/autotest_common.sh@945 -- # kill 71158 00:28:04.335 07:47:07 -- common/autotest_common.sh@950 -- # wait 71158 00:28:04.335 07:47:07 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:04.335 [2024-10-07 07:46:50.726201] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:04.335 [2024-10-07 07:46:50.726251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71158 ]
00:28:04.335 EAL: No free 2048 kB hugepages reported on node 1
00:28:04.335 [2024-10-07 07:46:50.783032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:04.335 [2024-10-07 07:46:50.854855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.335 Running I/O for 15 seconds...
00:28:04.335 [2024-10-07 07:46:53.458524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.335 [2024-10-07 07:46:53.458562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 07:46:53.458885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.458989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.458995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 
[2024-10-07 07:46:53.459144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.336 [2024-10-07 07:46:53.459281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.336 [2024-10-07 07:46:53.459312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.336 [2024-10-07 07:46:53.459342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.336 [2024-10-07 07:46:53.459378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.336 [2024-10-07 07:46:53.459385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 
[2024-10-07 07:46:53.459393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 
[2024-10-07 07:46:53.459646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 
[2024-10-07 07:46:53.459896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.337 [2024-10-07 07:46:53.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.337 [2024-10-07 07:46:53.459969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.337 [2024-10-07 07:46:53.459975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.459983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.459989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.459997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 
[2024-10-07 07:46:53.460149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.338 [2024-10-07 07:46:53.460348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 
[2024-10-07 07:46:53.460399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.338 [2024-10-07 07:46:53.460451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768820 is same with the state(5) to be set 00:28:04.338 [2024-10-07 07:46:53.460467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:04.338 [2024-10-07 07:46:53.460473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:04.338 [2024-10-07 07:46:53.460481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22864 len:8 PRP1 0x0 PRP2 0x0 00:28:04.338 [2024-10-07 
07:46:53.460487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460530] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1768820 was disconnected and freed. reset controller. 00:28:04.338 [2024-10-07 07:46:53.460545] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:04.338 [2024-10-07 07:46:53.460567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.338 [2024-10-07 07:46:53.460575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.338 [2024-10-07 07:46:53.460589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.338 [2024-10-07 07:46:53.460602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.338 [2024-10-07 07:46:53.460616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.338 [2024-10-07 07:46:53.460623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:04.338 [2024-10-07 07:46:53.462439] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.339 [2024-10-07 07:46:53.462465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749a40 (9): Bad file descriptor 00:28:04.339 [2024-10-07 07:46:53.536971] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:04.339 [2024-10-07 07:46:57.049978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33192 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.339 [2024-10-07 07:46:57.050491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.339 [2024-10-07 07:46:57.050497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:04.340 [2024-10-07 07:46:57.050599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 
[2024-10-07 07:46:57.050850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.340 [2024-10-07 07:46:57.050908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.340 [2024-10-07 07:46:57.050922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.340 [2024-10-07 07:46:57.050930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.340 [2024-10-07 07:46:57.050936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command printouts on qid:1 (lba 33624-34512), each completed as ABORTED - SQ DELETION (00/08), omitted for brevity ...]
00:28:04.342 [2024-10-07 07:46:57.051912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755ef0 is same with the state(5) to be set
00:28:04.342 [2024-10-07 07:46:57.051922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:04.342 [2024-10-07 07:46:57.051927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:04.342 [2024-10-07 07:46:57.051935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34032 len:8 PRP1 0x0 PRP2 0x0
00:28:04.342 [2024-10-07 07:46:57.051942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.342 [2024-10-07 07:46:57.051983] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1755ef0 was disconnected and freed. reset controller.
00:28:04.342 [2024-10-07 07:46:57.051992] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four ASYNC EVENT REQUEST admin commands (qid:0, cid:3 through cid:0) completed as ABORTED - SQ DELETION (00/08), omitted for brevity ...]
00:28:04.342 [2024-10-07 07:46:57.052075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:04.342 [2024-10-07 07:46:57.052097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749a40 (9): Bad file descriptor
00:28:04.342 [2024-10-07 07:46:57.054010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:04.342 [2024-10-07 07:46:57.075429] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:04.342 [2024-10-07 07:47:01.477155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.342 [2024-10-07 07:47:01.477192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command printouts on qid:1 (lba 9152-10048), each completed as ABORTED - SQ DELETION (00/08), omitted for brevity ...]
00:28:04.343 [2024-10-07 07:47:01.477762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:04.343 [2024-10-07 07:47:01.477768] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.343 [2024-10-07 07:47:01.477782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.343 [2024-10-07 07:47:01.477830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.343 [2024-10-07 07:47:01.477874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.343 [2024-10-07 07:47:01.477888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.343 [2024-10-07 07:47:01.477903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:04.343 [2024-10-07 07:47:01.477940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.477990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.477998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.343 [2024-10-07 07:47:01.478005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.343 [2024-10-07 07:47:01.478014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 
07:47:01.478194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.344 [2024-10-07 07:47:01.478347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.344 [2024-10-07 07:47:01.478401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.344 [2024-10-07 07:47:01.478407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 
[2024-10-07 07:47:01.478612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.345 [2024-10-07 07:47:01.478875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.345 [2024-10-07 07:47:01.478926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.345 [2024-10-07 07:47:01.478932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.478941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.346 [2024-10-07 07:47:01.478948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.478956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.346 [2024-10-07 07:47:01.478962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.478970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.478977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.478984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.478998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.479005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.479019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:04.346 [2024-10-07 07:47:01.479027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.479033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.479048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.346 [2024-10-07 07:47:01.479068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176c070 is same with the state(5) to be set 00:28:04.346 [2024-10-07 07:47:01.479084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:04.346 [2024-10-07 07:47:01.479090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:04.346 [2024-10-07 07:47:01.479098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10024 len:8 PRP1 0x0 PRP2 0x0 00:28:04.346 [2024-10-07 07:47:01.479104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479146] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x176c070 was disconnected and freed. reset controller. 
00:28:04.346 [2024-10-07 07:47:01.479155] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:04.346 [2024-10-07 07:47:01.479176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.346 [2024-10-07 07:47:01.479184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.346 [2024-10-07 07:47:01.479201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.346 [2024-10-07 07:47:01.479214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.346 [2024-10-07 07:47:01.479228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.346 [2024-10-07 07:47:01.479234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:04.346 [2024-10-07 07:47:01.480921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.346 [2024-10-07 07:47:01.480949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1749a40 (9): Bad file descriptor 00:28:04.346 [2024-10-07 07:47:01.544928] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:04.346 00:28:04.346 Latency(us) 00:28:04.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.346 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:04.346 Verification LBA range: start 0x0 length 0x4000 00:28:04.346 NVMe0n1 : 15.00 17362.32 67.82 776.63 0.00 7043.81 674.86 14542.75 00:28:04.346 =================================================================================================================== 00:28:04.346 Total : 17362.32 67.82 776.63 0.00 7043.81 674.86 14542.75 00:28:04.346 Received shutdown signal, test time was about 15.000000 seconds 00:28:04.346 00:28:04.346 Latency(us) 00:28:04.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.346 =================================================================================================================== 00:28:04.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.346 07:47:07 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:04.346 07:47:07 -- host/failover.sh@65 -- # count=3 00:28:04.346 07:47:07 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:04.346 07:47:07 -- host/failover.sh@73 -- # bdevperf_pid=73888 00:28:04.346 07:47:07 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:04.346 07:47:07 -- host/failover.sh@75 -- # waitforlisten 73888 /var/tmp/bdevperf.sock 00:28:04.346 07:47:07 -- common/autotest_common.sh@819 -- # 
'[' -z 73888 ']' 00:28:04.346 07:47:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:04.346 07:47:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:04.346 07:47:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:04.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:04.346 07:47:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:04.346 07:47:07 -- common/autotest_common.sh@10 -- # set +x 00:28:04.603 07:47:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:04.603 07:47:08 -- common/autotest_common.sh@852 -- # return 0 00:28:04.603 07:47:08 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:04.860 [2024-10-07 07:47:08.724950] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:04.860 07:47:08 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:05.119 [2024-10-07 07:47:08.913512] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:05.119 07:47:08 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.378 NVMe0n1 00:28:05.637 07:47:09 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.896 00:28:05.896 07:47:09 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.155 00:28:06.414 07:47:10 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.414 07:47:10 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:06.414 07:47:10 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.673 07:47:10 -- host/failover.sh@87 -- # sleep 3 00:28:09.963 07:47:13 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:09.963 07:47:13 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:09.963 07:47:13 -- host/failover.sh@90 -- # run_test_pid=74810 00:28:09.963 07:47:13 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:09.963 07:47:13 -- host/failover.sh@92 -- # wait 74810 00:28:10.901 0 00:28:10.901 07:47:14 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.901 [2024-10-07 07:47:07.725658] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:10.901 [2024-10-07 07:47:07.725709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73888 ] 00:28:10.901 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.901 [2024-10-07 07:47:07.781890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.901 [2024-10-07 07:47:07.847516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.901 [2024-10-07 07:47:10.492161] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:10.901 [2024-10-07 07:47:10.492212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.901 [2024-10-07 07:47:10.492223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.901 [2024-10-07 07:47:10.492232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.901 [2024-10-07 07:47:10.492239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.901 [2024-10-07 07:47:10.492246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.901 [2024-10-07 07:47:10.492252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.901 [2024-10-07 07:47:10.492259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.901 [2024-10-07 07:47:10.492267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.901 [2024-10-07 07:47:10.492274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:10.901 [2024-10-07 07:47:10.492296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:10.902 [2024-10-07 07:47:10.492311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eda40 (9): Bad file descriptor 00:28:10.902 [2024-10-07 07:47:10.498456] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:10.902 Running I/O for 1 seconds... 00:28:10.902 00:28:10.902 Latency(us) 00:28:10.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.902 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:10.902 Verification LBA range: start 0x0 length 0x4000 00:28:10.902 NVMe0n1 : 1.00 17384.03 67.91 0.00 0.00 7334.76 877.71 13793.77 00:28:10.902 =================================================================================================================== 00:28:10.902 Total : 17384.03 67.91 0.00 0.00 7334.76 877.71 13793.77 00:28:10.902 07:47:14 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.902 07:47:14 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:11.160 07:47:15 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:11.419 07:47:15 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:11.419 07:47:15 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:11.678 07:47:15 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:11.678 07:47:15 -- host/failover.sh@101 -- # sleep 3 00:28:14.968 07:47:18 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.968 07:47:18 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:14.968 07:47:18 -- host/failover.sh@108 -- # killprocess 73888 00:28:14.968 07:47:18 -- common/autotest_common.sh@926 -- # '[' -z 73888 ']' 00:28:14.968 07:47:18 -- common/autotest_common.sh@930 -- # kill -0 73888 00:28:14.968 07:47:18 -- common/autotest_common.sh@931 -- # uname 00:28:14.968 07:47:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:14.968 07:47:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73888 00:28:14.968 07:47:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:14.968 07:47:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:14.968 07:47:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73888' 00:28:14.968 killing process with pid 73888 00:28:14.968 07:47:18 -- common/autotest_common.sh@945 -- # kill 73888 00:28:14.968 07:47:18 -- common/autotest_common.sh@950 -- # wait 73888 00:28:15.228 07:47:19 -- host/failover.sh@110 -- # sync 00:28:15.228 07:47:19 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.487 07:47:19 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:15.487 07:47:19 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:15.487 07:47:19 -- host/failover.sh@116 -- # nvmftestfini 00:28:15.487 07:47:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:15.487 07:47:19 -- nvmf/common.sh@116 -- # 
sync 00:28:15.487 07:47:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:15.487 07:47:19 -- nvmf/common.sh@119 -- # set +e 00:28:15.487 07:47:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:15.487 07:47:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:15.487 rmmod nvme_tcp 00:28:15.487 rmmod nvme_fabrics 00:28:15.487 rmmod nvme_keyring 00:28:15.487 07:47:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:15.487 07:47:19 -- nvmf/common.sh@123 -- # set -e 00:28:15.487 07:47:19 -- nvmf/common.sh@124 -- # return 0 00:28:15.487 07:47:19 -- nvmf/common.sh@477 -- # '[' -n 70689 ']' 00:28:15.487 07:47:19 -- nvmf/common.sh@478 -- # killprocess 70689 00:28:15.487 07:47:19 -- common/autotest_common.sh@926 -- # '[' -z 70689 ']' 00:28:15.487 07:47:19 -- common/autotest_common.sh@930 -- # kill -0 70689 00:28:15.487 07:47:19 -- common/autotest_common.sh@931 -- # uname 00:28:15.487 07:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:15.487 07:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70689 00:28:15.487 07:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:15.487 07:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:15.487 07:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70689' 00:28:15.487 killing process with pid 70689 00:28:15.487 07:47:19 -- common/autotest_common.sh@945 -- # kill 70689 00:28:15.487 07:47:19 -- common/autotest_common.sh@950 -- # wait 70689 00:28:15.746 07:47:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:15.746 07:47:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:15.746 07:47:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:15.746 07:47:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.746 07:47:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:15.746 07:47:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.746 07:47:19 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.746 07:47:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.656 07:47:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:17.656 00:28:17.656 real 0m38.834s 00:28:17.656 user 2m5.227s 00:28:17.656 sys 0m7.960s 00:28:17.656 07:47:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.656 07:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.656 ************************************ 00:28:17.656 END TEST nvmf_failover 00:28:17.656 ************************************ 00:28:17.915 07:47:21 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:17.915 07:47:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:17.915 07:47:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:17.915 07:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.915 ************************************ 00:28:17.915 START TEST nvmf_discovery 00:28:17.915 ************************************ 00:28:17.915 07:47:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:17.915 * Looking for test storage... 
00:28:17.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.915 07:47:21 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.915 07:47:21 -- nvmf/common.sh@7 -- # uname -s 00:28:17.915 07:47:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.915 07:47:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.915 07:47:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.915 07:47:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.915 07:47:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.915 07:47:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.915 07:47:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.915 07:47:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.915 07:47:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.915 07:47:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.915 07:47:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:17.915 07:47:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:17.915 07:47:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.915 07:47:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.915 07:47:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.915 07:47:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.915 07:47:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.915 07:47:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.915 07:47:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.915 07:47:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.915 07:47:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.915 07:47:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.915 07:47:21 -- paths/export.sh@5 -- # export PATH 00:28:17.915 07:47:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.915 07:47:21 -- nvmf/common.sh@46 -- # : 0 00:28:17.915 07:47:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:17.915 07:47:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:17.915 07:47:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:17.915 07:47:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.915 07:47:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.915 07:47:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:17.915 07:47:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:17.915 07:47:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:17.915 07:47:21 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:17.915 07:47:21 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:17.915 07:47:21 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:17.915 07:47:21 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:17.915 07:47:21 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:17.915 07:47:21 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:17.915 07:47:21 -- host/discovery.sh@25 -- # nvmftestinit 00:28:17.915 07:47:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:17.915 07:47:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.915 07:47:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:17.915 07:47:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:17.915 
07:47:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:17.915 07:47:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.915 07:47:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.915 07:47:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.915 07:47:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:17.915 07:47:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:17.915 07:47:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:17.915 07:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:23.202 07:47:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:23.202 07:47:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:23.202 07:47:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:23.202 07:47:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:23.202 07:47:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:23.202 07:47:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:23.202 07:47:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:23.202 07:47:26 -- nvmf/common.sh@294 -- # net_devs=() 00:28:23.202 07:47:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:23.202 07:47:26 -- nvmf/common.sh@295 -- # e810=() 00:28:23.202 07:47:26 -- nvmf/common.sh@295 -- # local -ga e810 00:28:23.202 07:47:26 -- nvmf/common.sh@296 -- # x722=() 00:28:23.202 07:47:26 -- nvmf/common.sh@296 -- # local -ga x722 00:28:23.202 07:47:26 -- nvmf/common.sh@297 -- # mlx=() 00:28:23.202 07:47:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:23.202 07:47:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.202 07:47:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:23.202 07:47:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:23.202 07:47:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:23.202 07:47:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:23.202 07:47:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:23.202 07:47:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:23.202 07:47:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.202 07:47:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:23.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:23.202 07:47:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.202 07:47:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.202 07:47:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:23.203 07:47:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:23.203 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:23.203 07:47:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:23.203 07:47:26 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:23.203 07:47:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.203 07:47:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.203 07:47:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.203 07:47:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.203 07:47:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:23.203 Found net devices under 0000:af:00.0: cvl_0_0 00:28:23.203 07:47:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.203 07:47:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:23.203 07:47:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.203 07:47:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:23.203 07:47:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.203 07:47:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:23.203 Found net devices under 0000:af:00.1: cvl_0_1 00:28:23.203 07:47:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.203 07:47:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:23.203 07:47:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:23.203 07:47:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:23.203 07:47:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.203 07:47:26 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.203 07:47:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.203 07:47:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:23.203 07:47:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.203 07:47:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.203 07:47:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:23.203 07:47:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.203 07:47:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.203 07:47:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:23.203 07:47:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:23.203 07:47:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.203 07:47:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.203 07:47:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.203 07:47:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.203 07:47:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:23.203 07:47:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.203 07:47:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.203 07:47:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.203 07:47:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:23.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:23.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:28:23.203 00:28:23.203 --- 10.0.0.2 ping statistics --- 00:28:23.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.203 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:28:23.203 07:47:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:28:23.203 00:28:23.203 --- 10.0.0.1 ping statistics --- 00:28:23.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.203 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:28:23.203 07:47:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.203 07:47:26 -- nvmf/common.sh@410 -- # return 0 00:28:23.203 07:47:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:23.203 07:47:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.203 07:47:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:23.203 07:47:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.203 07:47:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:23.203 07:47:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:23.203 07:47:26 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:23.203 07:47:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:23.203 07:47:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:23.203 07:47:26 -- common/autotest_common.sh@10 -- # set +x 00:28:23.203 07:47:26 -- nvmf/common.sh@469 -- # nvmfpid=79092 00:28:23.203 07:47:26 -- nvmf/common.sh@470 -- # waitforlisten 79092 00:28:23.203 07:47:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:23.203 07:47:26 -- common/autotest_common.sh@819 -- # 
'[' -z 79092 ']' 00:28:23.203 07:47:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.203 07:47:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:23.203 07:47:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.203 07:47:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:23.203 07:47:26 -- common/autotest_common.sh@10 -- # set +x 00:28:23.203 [2024-10-07 07:47:26.932666] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:23.203 [2024-10-07 07:47:26.932712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.203 [2024-10-07 07:47:26.990341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.203 [2024-10-07 07:47:27.066336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:23.203 [2024-10-07 07:47:27.066443] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.203 [2024-10-07 07:47:27.066451] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.203 [2024-10-07 07:47:27.066457] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:23.203 [2024-10-07 07:47:27.066475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.141 07:47:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:24.141 07:47:27 -- common/autotest_common.sh@852 -- # return 0 00:28:24.141 07:47:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:24.141 07:47:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 07:47:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.141 07:47:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.141 07:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 [2024-10-07 07:47:27.788542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.141 07:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.141 07:47:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:24.141 07:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 [2024-10-07 07:47:27.796683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:24.141 07:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.141 07:47:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:24.141 07:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 null0 00:28:24.141 07:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.141 07:47:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:24.141 07:47:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 null1 00:28:24.141 07:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.141 07:47:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:24.141 07:47:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 07:47:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.141 07:47:27 -- host/discovery.sh@45 -- # hostpid=79220 00:28:24.141 07:47:27 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:24.141 07:47:27 -- host/discovery.sh@46 -- # waitforlisten 79220 /tmp/host.sock 00:28:24.141 07:47:27 -- common/autotest_common.sh@819 -- # '[' -z 79220 ']' 00:28:24.141 07:47:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:24.141 07:47:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.141 07:47:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:24.141 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:24.141 07:47:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.141 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:28:24.141 [2024-10-07 07:47:27.867822] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:24.141 [2024-10-07 07:47:27.867863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79220 ] 00:28:24.141 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.141 [2024-10-07 07:47:27.924428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.141 [2024-10-07 07:47:27.998594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.141 [2024-10-07 07:47:27.998712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.117 07:47:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:25.117 07:47:28 -- common/autotest_common.sh@852 -- # return 0 00:28:25.117 07:47:28 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.117 07:47:28 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@72 -- # notify_id=0 00:28:25.117 07:47:28 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:25.117 
07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # sort 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:25.117 07:47:28 -- host/discovery.sh@79 -- # get_bdev_list 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # sort 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:25.117 07:47:28 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # sort 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:25.117 07:47:28 -- 
host/discovery.sh@83 -- # get_bdev_list 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # sort 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:25.117 07:47:28 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # sort 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 -- host/discovery.sh@59 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:28 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:25.117 07:47:28 -- host/discovery.sh@87 -- # get_bdev_list 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.117 07:47:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:28 -- host/discovery.sh@55 -- # sort 00:28:25.117 07:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 07:47:28 
-- host/discovery.sh@55 -- # xargs 00:28:25.117 07:47:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:29 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:25.117 07:47:29 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:25.117 07:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.117 07:47:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.117 [2024-10-07 07:47:29.007865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.117 07:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.117 07:47:29 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:25.117 07:47:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:25.118 07:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.118 07:47:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.118 07:47:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:25.118 07:47:29 -- host/discovery.sh@59 -- # sort 00:28:25.118 07:47:29 -- host/discovery.sh@59 -- # xargs 00:28:25.118 07:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.440 07:47:29 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:25.440 07:47:29 -- host/discovery.sh@93 -- # get_bdev_list 00:28:25.440 07:47:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:25.440 07:47:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:25.440 07:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.440 07:47:29 -- host/discovery.sh@55 -- # sort 00:28:25.440 07:47:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.440 07:47:29 -- host/discovery.sh@55 -- # xargs 00:28:25.440 07:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.440 07:47:29 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:25.440 07:47:29 -- host/discovery.sh@94 -- # get_notification_count 
00:28:25.440 07:47:29 -- host/discovery.sh@74 -- # jq '. | length' 00:28:25.440 07:47:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:25.440 07:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.440 07:47:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.440 07:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.440 07:47:29 -- host/discovery.sh@74 -- # notification_count=0 00:28:25.440 07:47:29 -- host/discovery.sh@75 -- # notify_id=0 00:28:25.440 07:47:29 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:25.440 07:47:29 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:25.440 07:47:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.440 07:47:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.440 07:47:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.440 07:47:29 -- host/discovery.sh@100 -- # sleep 1 00:28:26.017 [2024-10-07 07:47:29.706153] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:26.017 [2024-10-07 07:47:29.706175] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:26.017 [2024-10-07 07:47:29.706190] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:26.017 [2024-10-07 07:47:29.792444] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:26.275 [2024-10-07 07:47:30.011618] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:26.275 [2024-10-07 07:47:30.011641] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:26.275 07:47:30 -- host/discovery.sh@101 -- # get_subsystem_names 
00:28:26.275 07:47:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:26.275 07:47:30 -- host/discovery.sh@59 -- # sort 00:28:26.275 07:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.275 07:47:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:26.275 07:47:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.275 07:47:30 -- host/discovery.sh@59 -- # xargs 00:28:26.275 07:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.275 07:47:30 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.275 07:47:30 -- host/discovery.sh@102 -- # get_bdev_list 00:28:26.275 07:47:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:26.275 07:47:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:26.275 07:47:30 -- host/discovery.sh@55 -- # sort 00:28:26.275 07:47:30 -- host/discovery.sh@55 -- # xargs 00:28:26.275 07:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.275 07:47:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.275 07:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.275 07:47:30 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:26.275 07:47:30 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:26.275 07:47:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:26.275 07:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.275 07:47:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.275 07:47:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:26.275 07:47:30 -- host/discovery.sh@63 -- # sort -n 00:28:26.275 07:47:30 -- host/discovery.sh@63 -- # xargs 00:28:26.533 07:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.533 07:47:30 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:26.533 07:47:30 -- host/discovery.sh@104 -- # get_notification_count 00:28:26.533 07:47:30 -- 
host/discovery.sh@74 -- # jq '. | length' 00:28:26.533 07:47:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:26.533 07:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.533 07:47:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.533 07:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.533 07:47:30 -- host/discovery.sh@74 -- # notification_count=1 00:28:26.533 07:47:30 -- host/discovery.sh@75 -- # notify_id=1 00:28:26.533 07:47:30 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:26.533 07:47:30 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:26.533 07:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.533 07:47:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.533 07:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.533 07:47:30 -- host/discovery.sh@109 -- # sleep 1 00:28:27.466 07:47:31 -- host/discovery.sh@110 -- # get_bdev_list 00:28:27.466 07:47:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:27.466 07:47:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:27.466 07:47:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.466 07:47:31 -- host/discovery.sh@55 -- # sort 00:28:27.466 07:47:31 -- common/autotest_common.sh@10 -- # set +x 00:28:27.466 07:47:31 -- host/discovery.sh@55 -- # xargs 00:28:27.466 07:47:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.466 07:47:31 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:27.466 07:47:31 -- host/discovery.sh@111 -- # get_notification_count 00:28:27.466 07:47:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:27.466 07:47:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:27.466 07:47:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.466 07:47:31 -- common/autotest_common.sh@10 -- # set +x 00:28:27.466 07:47:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.466 07:47:31 -- host/discovery.sh@74 -- # notification_count=1 00:28:27.466 07:47:31 -- host/discovery.sh@75 -- # notify_id=2 00:28:27.466 07:47:31 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:27.466 07:47:31 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:27.466 07:47:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.466 07:47:31 -- common/autotest_common.sh@10 -- # set +x 00:28:27.466 [2024-10-07 07:47:31.406484] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.466 [2024-10-07 07:47:31.407406] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:27.466 [2024-10-07 07:47:31.407430] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:27.466 07:47:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.466 07:47:31 -- host/discovery.sh@117 -- # sleep 1 00:28:27.724 [2024-10-07 07:47:31.493674] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:27.724 [2024-10-07 07:47:31.593312] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:27.724 [2024-10-07 07:47:31.593328] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:27.724 [2024-10-07 07:47:31.593333] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:28.660 07:47:32 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:28.660 07:47:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:28.660 07:47:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:28.660 07:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.660 07:47:32 -- host/discovery.sh@59 -- # sort 00:28:28.660 07:47:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.660 07:47:32 -- host/discovery.sh@59 -- # xargs 00:28:28.660 07:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@119 -- # get_bdev_list 00:28:28.660 07:47:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:28.660 07:47:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:28.660 07:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.660 07:47:32 -- host/discovery.sh@55 -- # sort 00:28:28.660 07:47:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.660 07:47:32 -- host/discovery.sh@55 -- # xargs 00:28:28.660 07:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:28.660 07:47:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:28.660 07:47:32 -- host/discovery.sh@63 -- # xargs 00:28:28.660 07:47:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:28.660 07:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.660 07:47:32 -- host/discovery.sh@63 -- # sort -n 00:28:28.660 07:47:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.660 07:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- 
host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@121 -- # get_notification_count 00:28:28.660 07:47:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:28.660 07:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.660 07:47:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.660 07:47:32 -- host/discovery.sh@74 -- # jq '. | length' 00:28:28.660 07:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@74 -- # notification_count=0 00:28:28.660 07:47:32 -- host/discovery.sh@75 -- # notify_id=2 00:28:28.660 07:47:32 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:28.660 07:47:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.660 07:47:32 -- common/autotest_common.sh@10 -- # set +x 00:28:28.660 [2024-10-07 07:47:32.606200] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:28.660 [2024-10-07 07:47:32.606226] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:28.660 07:47:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.660 07:47:32 -- host/discovery.sh@127 -- # sleep 1 00:28:28.660 [2024-10-07 07:47:32.615001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.660 [2024-10-07 07:47:32.615019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.660 [2024-10-07 07:47:32.615027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.660 [2024-10-07 07:47:32.615034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.660 [2024-10-07 07:47:32.615042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.660 [2024-10-07 07:47:32.615048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.660 [2024-10-07 07:47:32.615055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.660 [2024-10-07 07:47:32.615066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.660 [2024-10-07 07:47:32.615074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.660 [2024-10-07 07:47:32.625016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.920 [2024-10-07 07:47:32.635054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.920 [2024-10-07 07:47:32.635295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.635512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.635525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.920 [2024-10-07 07:47:32.635533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.920 [2024-10-07 07:47:32.635545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file 
descriptor 00:28:28.920 [2024-10-07 07:47:32.635563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.920 [2024-10-07 07:47:32.635570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.920 [2024-10-07 07:47:32.635578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.920 [2024-10-07 07:47:32.635589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.920 [2024-10-07 07:47:32.645110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.920 [2024-10-07 07:47:32.645322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.645504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.645516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.920 [2024-10-07 07:47:32.645525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.920 [2024-10-07 07:47:32.645535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.920 [2024-10-07 07:47:32.645545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.920 [2024-10-07 07:47:32.645555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.920 [2024-10-07 07:47:32.645561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:28.920 [2024-10-07 07:47:32.645570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.920 [2024-10-07 07:47:32.655159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.920 [2024-10-07 07:47:32.655389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.655635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.655647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.920 [2024-10-07 07:47:32.655656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.920 [2024-10-07 07:47:32.655668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.920 [2024-10-07 07:47:32.655684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.920 [2024-10-07 07:47:32.655691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.920 [2024-10-07 07:47:32.655698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.920 [2024-10-07 07:47:32.655707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.920 [2024-10-07 07:47:32.665212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.920 [2024-10-07 07:47:32.665495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.665697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.665709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.920 [2024-10-07 07:47:32.665717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.920 [2024-10-07 07:47:32.665727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.920 [2024-10-07 07:47:32.665744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.920 [2024-10-07 07:47:32.665751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.920 [2024-10-07 07:47:32.665757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.920 [2024-10-07 07:47:32.665766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.920 [2024-10-07 07:47:32.675261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.920 [2024-10-07 07:47:32.675536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.920 [2024-10-07 07:47:32.675692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.675703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.675711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.675721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.675731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.675737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.675747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.675756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.685310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.921 [2024-10-07 07:47:32.685528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.685727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.685739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.685747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.685758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.685767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.685773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.685779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.685788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.695360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.921 [2024-10-07 07:47:32.695631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.695862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.695874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.695882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.695892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.695909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.695916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.695922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.695931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.705411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.921 [2024-10-07 07:47:32.705581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.705789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.705800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.705807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.705818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.705828] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.705834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.705841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.705855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.715460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.921 [2024-10-07 07:47:32.715673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.715891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.715902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.715910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.715921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.715930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.715936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.715943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.715952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.725509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:28.921 [2024-10-07 07:47:32.725727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.726005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.921 [2024-10-07 07:47:32.726017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e6670 with addr=10.0.0.2, port=4420 00:28:28.921 [2024-10-07 07:47:32.726024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e6670 is same with the state(5) to be set 00:28:28.921 [2024-10-07 07:47:32.726034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e6670 (9): Bad file descriptor 00:28:28.921 [2024-10-07 07:47:32.726051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:28.921 [2024-10-07 07:47:32.726062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:28.921 [2024-10-07 07:47:32.726069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:28.921 [2024-10-07 07:47:32.726078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.921 [2024-10-07 07:47:32.733075] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:28.921 [2024-10-07 07:47:32.733091] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:29.857 07:47:33 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:29.857 07:47:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:29.857 07:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.857 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.857 07:47:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:29.857 07:47:33 -- host/discovery.sh@59 -- # sort 00:28:29.857 07:47:33 -- host/discovery.sh@59 -- # xargs 00:28:29.857 07:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@129 -- # get_bdev_list 00:28:29.857 07:47:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:29.857 07:47:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:29.857 07:47:33 -- host/discovery.sh@55 -- # xargs 00:28:29.857 07:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.857 07:47:33 -- host/discovery.sh@55 -- # sort 00:28:29.857 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.857 07:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:29.857 07:47:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:29.857 07:47:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:29.857 07:47:33 -- host/discovery.sh@63 -- # sort -n 
00:28:29.857 07:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.857 07:47:33 -- host/discovery.sh@63 -- # xargs 00:28:29.857 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.857 07:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@131 -- # get_notification_count 00:28:29.857 07:47:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:29.857 07:47:33 -- host/discovery.sh@74 -- # jq '. | length' 00:28:29.857 07:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.857 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.857 07:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@74 -- # notification_count=0 00:28:29.857 07:47:33 -- host/discovery.sh@75 -- # notify_id=2 00:28:29.857 07:47:33 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:29.857 07:47:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.857 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:28:29.857 07:47:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.857 07:47:33 -- host/discovery.sh@135 -- # sleep 1 00:28:31.231 07:47:34 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:31.231 07:47:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.231 07:47:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.232 07:47:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.232 07:47:34 -- host/discovery.sh@59 -- # sort 00:28:31.232 07:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:31.232 07:47:34 -- host/discovery.sh@59 -- # xargs 00:28:31.232 07:47:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.232 07:47:34 -- 
host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:31.232 07:47:34 -- host/discovery.sh@137 -- # get_bdev_list 00:28:31.232 07:47:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.232 07:47:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.232 07:47:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.232 07:47:34 -- host/discovery.sh@55 -- # sort 00:28:31.232 07:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:31.232 07:47:34 -- host/discovery.sh@55 -- # xargs 00:28:31.232 07:47:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.232 07:47:34 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:31.232 07:47:34 -- host/discovery.sh@138 -- # get_notification_count 00:28:31.232 07:47:34 -- host/discovery.sh@74 -- # jq '. | length' 00:28:31.232 07:47:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:31.232 07:47:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.232 07:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:31.232 07:47:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.232 07:47:34 -- host/discovery.sh@74 -- # notification_count=2 00:28:31.232 07:47:34 -- host/discovery.sh@75 -- # notify_id=4 00:28:31.232 07:47:34 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:31.232 07:47:34 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:31.232 07:47:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.232 07:47:34 -- common/autotest_common.sh@10 -- # set +x 00:28:32.166 [2024-10-07 07:47:35.993604] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:32.166 [2024-10-07 07:47:35.993621] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:32.166 [2024-10-07 07:47:35.993631] 
bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:32.166 [2024-10-07 07:47:36.120015] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:32.425 [2024-10-07 07:47:36.226355] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:32.425 [2024-10-07 07:47:36.226392] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.425 07:47:36 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@640 -- # local es=0 00:28:32.425 07:47:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.425 07:47:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.425 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 request: 00:28:32.425 { 00:28:32.425 "name": "nvme", 00:28:32.425 "trtype": "tcp", 00:28:32.425 "traddr": "10.0.0.2", 00:28:32.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:32.425 "adrfam": 
"ipv4", 00:28:32.425 "trsvcid": "8009", 00:28:32.425 "wait_for_attach": true, 00:28:32.425 "method": "bdev_nvme_start_discovery", 00:28:32.425 "req_id": 1 00:28:32.425 } 00:28:32.425 Got JSON-RPC error response 00:28:32.425 response: 00:28:32.425 { 00:28:32.425 "code": -17, 00:28:32.425 "message": "File exists" 00:28:32.425 } 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:32.425 07:47:36 -- common/autotest_common.sh@643 -- # es=1 00:28:32.425 07:47:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:32.425 07:47:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:32.425 07:47:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:32.425 07:47:36 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:32.425 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # sort 00:28:32.425 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # xargs 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.425 07:47:36 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:32.425 07:47:36 -- host/discovery.sh@147 -- # get_bdev_list 00:28:32.425 07:47:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.425 07:47:36 -- host/discovery.sh@55 -- # xargs 00:28:32.425 07:47:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.425 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.425 07:47:36 -- host/discovery.sh@55 -- # sort 00:28:32.425 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.425 07:47:36 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:28:32.425 07:47:36 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@640 -- # local es=0 00:28:32.425 07:47:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:32.425 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.425 07:47:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:32.425 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.425 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 request: 00:28:32.425 { 00:28:32.425 "name": "nvme_second", 00:28:32.425 "trtype": "tcp", 00:28:32.425 "traddr": "10.0.0.2", 00:28:32.425 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:32.425 "adrfam": "ipv4", 00:28:32.425 "trsvcid": "8009", 00:28:32.425 "wait_for_attach": true, 00:28:32.425 "method": "bdev_nvme_start_discovery", 00:28:32.425 "req_id": 1 00:28:32.425 } 00:28:32.425 Got JSON-RPC error response 00:28:32.425 response: 00:28:32.425 { 00:28:32.425 "code": -17, 00:28:32.425 "message": "File exists" 00:28:32.425 } 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:32.425 07:47:36 -- common/autotest_common.sh@643 -- # es=1 00:28:32.425 07:47:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:32.425 07:47:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:32.425 07:47:36 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:32.425 07:47:36 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:32.425 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.425 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # sort 00:28:32.425 07:47:36 -- host/discovery.sh@67 -- # xargs 00:28:32.425 07:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.684 07:47:36 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:32.684 07:47:36 -- host/discovery.sh@153 -- # get_bdev_list 00:28:32.684 07:47:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.684 07:47:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.684 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.684 07:47:36 -- host/discovery.sh@55 -- # sort 00:28:32.684 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.684 07:47:36 -- host/discovery.sh@55 -- # xargs 00:28:32.684 07:47:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.684 07:47:36 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:32.684 07:47:36 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.684 07:47:36 -- common/autotest_common.sh@640 -- # local es=0 00:28:32.684 07:47:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.684 07:47:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:32.684 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:28:32.684 07:47:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:32.684 07:47:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:32.684 07:47:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:32.684 07:47:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.684 07:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:33.619 [2024-10-07 07:47:37.457833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-10-07 07:47:37.458048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-10-07 07:47:37.458064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e4bf0 with addr=10.0.0.2, port=8010 00:28:33.619 [2024-10-07 07:47:37.458077] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:33.619 [2024-10-07 07:47:37.458083] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:33.619 [2024-10-07 07:47:37.458089] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:34.557 [2024-10-07 07:47:38.460265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.557 [2024-10-07 07:47:38.460511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.557 [2024-10-07 07:47:38.460523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e4bf0 with addr=10.0.0.2, port=8010 00:28:34.557 [2024-10-07 07:47:38.460535] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:34.557 [2024-10-07 07:47:38.460541] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:34.557 [2024-10-07 07:47:38.460547] bdev_nvme.c:6821:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:28:35.492 [2024-10-07 07:47:39.462385] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:35.750 request: 00:28:35.750 { 00:28:35.750 "name": "nvme_second", 00:28:35.750 "trtype": "tcp", 00:28:35.750 "traddr": "10.0.0.2", 00:28:35.750 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:35.750 "adrfam": "ipv4", 00:28:35.750 "trsvcid": "8010", 00:28:35.750 "attach_timeout_ms": 3000, 00:28:35.750 "method": "bdev_nvme_start_discovery", 00:28:35.750 "req_id": 1 00:28:35.750 } 00:28:35.750 Got JSON-RPC error response 00:28:35.750 response: 00:28:35.750 { 00:28:35.750 "code": -110, 00:28:35.750 "message": "Connection timed out" 00:28:35.750 } 00:28:35.750 07:47:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:35.750 07:47:39 -- common/autotest_common.sh@643 -- # es=1 00:28:35.750 07:47:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:35.750 07:47:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:35.750 07:47:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:35.750 07:47:39 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:35.750 07:47:39 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:35.750 07:47:39 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:35.750 07:47:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.750 07:47:39 -- host/discovery.sh@67 -- # sort 00:28:35.750 07:47:39 -- common/autotest_common.sh@10 -- # set +x 00:28:35.750 07:47:39 -- host/discovery.sh@67 -- # xargs 00:28:35.750 07:47:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.750 07:47:39 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:35.750 07:47:39 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:35.750 07:47:39 -- host/discovery.sh@162 -- # kill 79220 00:28:35.750 07:47:39 -- host/discovery.sh@163 -- # nvmftestfini 00:28:35.750 
07:47:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:35.750 07:47:39 -- nvmf/common.sh@116 -- # sync 00:28:35.750 07:47:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:35.750 07:47:39 -- nvmf/common.sh@119 -- # set +e 00:28:35.750 07:47:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:35.750 07:47:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:35.750 rmmod nvme_tcp 00:28:35.750 rmmod nvme_fabrics 00:28:35.750 rmmod nvme_keyring 00:28:35.750 07:47:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:35.750 07:47:39 -- nvmf/common.sh@123 -- # set -e 00:28:35.750 07:47:39 -- nvmf/common.sh@124 -- # return 0 00:28:35.750 07:47:39 -- nvmf/common.sh@477 -- # '[' -n 79092 ']' 00:28:35.750 07:47:39 -- nvmf/common.sh@478 -- # killprocess 79092 00:28:35.750 07:47:39 -- common/autotest_common.sh@926 -- # '[' -z 79092 ']' 00:28:35.750 07:47:39 -- common/autotest_common.sh@930 -- # kill -0 79092 00:28:35.750 07:47:39 -- common/autotest_common.sh@931 -- # uname 00:28:35.750 07:47:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:35.750 07:47:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79092 00:28:35.750 07:47:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:35.750 07:47:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:35.750 07:47:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79092' 00:28:35.750 killing process with pid 79092 00:28:35.750 07:47:39 -- common/autotest_common.sh@945 -- # kill 79092 00:28:35.750 07:47:39 -- common/autotest_common.sh@950 -- # wait 79092 00:28:36.009 07:47:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:36.009 07:47:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:36.009 07:47:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:36.009 07:47:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.009 07:47:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:36.009 
07:47:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.009 07:47:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.009 07:47:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.546 07:47:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:38.546 00:28:38.546 real 0m20.256s 00:28:38.546 user 0m27.534s 00:28:38.546 sys 0m5.205s 00:28:38.546 07:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.546 07:47:41 -- common/autotest_common.sh@10 -- # set +x 00:28:38.546 ************************************ 00:28:38.546 END TEST nvmf_discovery 00:28:38.546 ************************************ 00:28:38.546 07:47:41 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:38.546 07:47:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:38.546 07:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:38.546 07:47:41 -- common/autotest_common.sh@10 -- # set +x 00:28:38.546 ************************************ 00:28:38.546 START TEST nvmf_discovery_remove_ifc 00:28:38.546 ************************************ 00:28:38.546 07:47:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:38.546 * Looking for test storage... 
00:28:38.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:38.546 07:47:42 -- nvmf/common.sh@7 -- # uname -s 00:28:38.546 07:47:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.546 07:47:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.546 07:47:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.546 07:47:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.546 07:47:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.546 07:47:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.546 07:47:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.546 07:47:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.546 07:47:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.546 07:47:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.546 07:47:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:38.546 07:47:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:38.546 07:47:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.546 07:47:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.546 07:47:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.546 07:47:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.546 07:47:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.546 07:47:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.546 07:47:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.546 07:47:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.546 07:47:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.546 07:47:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.546 07:47:42 -- paths/export.sh@5 -- # export PATH 00:28:38.546 07:47:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.546 07:47:42 -- nvmf/common.sh@46 -- # : 0 00:28:38.546 07:47:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:38.546 07:47:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:38.546 07:47:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:38.546 07:47:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.546 07:47:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.546 07:47:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:38.546 07:47:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:38.546 07:47:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:38.546 07:47:42 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:38.546 07:47:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:38.546 07:47:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.546 07:47:42 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:28:38.546 07:47:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:38.546 07:47:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:38.546 07:47:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.546 07:47:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.546 07:47:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.546 07:47:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:38.546 07:47:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:38.546 07:47:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:38.546 07:47:42 -- common/autotest_common.sh@10 -- # set +x 00:28:43.821 07:47:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:43.821 07:47:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:43.821 07:47:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:43.821 07:47:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:43.821 07:47:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:43.821 07:47:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:43.821 07:47:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:43.821 07:47:47 -- nvmf/common.sh@294 -- # net_devs=() 00:28:43.821 07:47:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:43.821 07:47:47 -- nvmf/common.sh@295 -- # e810=() 00:28:43.821 07:47:47 -- nvmf/common.sh@295 -- # local -ga e810 00:28:43.821 07:47:47 -- nvmf/common.sh@296 -- # x722=() 00:28:43.821 07:47:47 -- nvmf/common.sh@296 -- # local -ga x722 00:28:43.821 07:47:47 -- nvmf/common.sh@297 -- # mlx=() 00:28:43.821 07:47:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:43.821 07:47:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.821 07:47:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:43.821 07:47:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.821 07:47:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:43.821 07:47:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.821 07:47:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:43.821 
07:47:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:43.821 07:47:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.821 07:47:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.821 07:47:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.821 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.821 07:47:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:43.821 07:47:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.821 07:47:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.821 07:47:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.821 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.821 07:47:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:43.821 07:47:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:43.821 07:47:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:43.821 07:47:47 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:43.821 07:47:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.821 07:47:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:43.821 07:47:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.821 07:47:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.821 07:47:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:43.821 07:47:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.821 07:47:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.821 07:47:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:43.821 07:47:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:43.821 07:47:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.821 07:47:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.821 07:47:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.821 07:47:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.821 07:47:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:43.821 07:47:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.821 07:47:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.821 07:47:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.821 07:47:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:43.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:28:43.821 00:28:43.821 --- 10.0.0.2 ping statistics --- 00:28:43.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.821 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:28:43.821 07:47:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:43.821 00:28:43.821 --- 10.0.0.1 ping statistics --- 00:28:43.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.821 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:43.821 07:47:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.821 07:47:47 -- nvmf/common.sh@410 -- # return 0 00:28:43.821 07:47:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:43.821 07:47:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.821 07:47:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:43.822 07:47:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:43.822 07:47:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.822 07:47:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:43.822 07:47:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:43.822 07:47:47 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:43.822 07:47:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:43.822 07:47:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:43.822 07:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.822 07:47:47 -- nvmf/common.sh@469 -- # nvmfpid=84673 00:28:43.822 07:47:47 -- nvmf/common.sh@470 -- # waitforlisten 84673 00:28:43.822 07:47:47 -- common/autotest_common.sh@819 -- # '[' -z 84673 ']' 00:28:43.822 07:47:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.822 07:47:47 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.822 07:47:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.822 07:47:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.822 07:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.822 07:47:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:43.822 [2024-10-07 07:47:47.340934] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:43.822 [2024-10-07 07:47:47.340975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.822 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.822 [2024-10-07 07:47:47.397892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.822 [2024-10-07 07:47:47.474935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:43.822 [2024-10-07 07:47:47.475038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.822 [2024-10-07 07:47:47.475046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.822 [2024-10-07 07:47:47.475053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.822 [2024-10-07 07:47:47.475074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.389 07:47:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:44.389 07:47:48 -- common/autotest_common.sh@852 -- # return 0 00:28:44.389 07:47:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:44.389 07:47:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:44.389 07:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:44.389 07:47:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.389 07:47:48 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:44.389 07:47:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.389 07:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:44.389 [2024-10-07 07:47:48.184044] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.389 [2024-10-07 07:47:48.192220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:44.389 null0 00:28:44.389 [2024-10-07 07:47:48.224207] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.389 07:47:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.389 07:47:48 -- host/discovery_remove_ifc.sh@59 -- # hostpid=84911 00:28:44.389 07:47:48 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84911 /tmp/host.sock 00:28:44.389 07:47:48 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:44.389 07:47:48 -- common/autotest_common.sh@819 -- # '[' -z 84911 ']' 00:28:44.389 07:47:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:44.389 07:47:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:44.389 07:47:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock...' 00:28:44.389 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:44.389 07:47:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:44.389 07:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:44.389 [2024-10-07 07:47:48.287429] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:44.389 [2024-10-07 07:47:48.287470] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84911 ] 00:28:44.389 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.389 [2024-10-07 07:47:48.343560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.649 [2024-10-07 07:47:48.419761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:44.649 [2024-10-07 07:47:48.419894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.216 07:47:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:45.216 07:47:49 -- common/autotest_common.sh@852 -- # return 0 00:28:45.216 07:47:49 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:45.216 07:47:49 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:45.216 07:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.216 07:47:49 -- common/autotest_common.sh@10 -- # set +x 00:28:45.216 07:47:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.216 07:47:49 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:45.216 07:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.216 07:47:49 -- common/autotest_common.sh@10 -- # set +x 00:28:45.216 07:47:49 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.216 07:47:49 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:45.216 07:47:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.216 07:47:49 -- common/autotest_common.sh@10 -- # set +x 00:28:46.595 [2024-10-07 07:47:50.202017] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:46.595 [2024-10-07 07:47:50.202046] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:46.595 [2024-10-07 07:47:50.202063] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:46.595 [2024-10-07 07:47:50.333449] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:46.595 [2024-10-07 07:47:50.392217] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:46.595 [2024-10-07 07:47:50.392252] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:46.595 [2024-10-07 07:47:50.392273] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:46.595 [2024-10-07 07:47:50.392286] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:46.595 [2024-10-07 07:47:50.392306] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:46.595 07:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.595 07:47:50 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.595 07:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.595 07:47:50 -- common/autotest_common.sh@10 -- # set +x 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.595 [2024-10-07 07:47:50.401231] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c27590 was disconnected and freed. delete nvme_qpair. 00:28:46.595 07:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:46.595 07:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:46.595 07:47:50 -- common/autotest_common.sh@10 -- # set +x 00:28:46.595 07:47:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:46.854 07:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:46.854 07:47:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:46.854 07:47:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:47.791 07:47:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.791 07:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:47.791 07:47:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:47.791 07:47:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:48.729 07:47:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:48.729 07:47:52 -- common/autotest_common.sh@10 -- # set +x 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:48.729 07:47:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:48.729 07:47:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:50.107 07:47:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.107 07:47:53 -- common/autotest_common.sh@10 -- # set +x 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:50.107 07:47:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.107 07:47:53 
-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:50.107 07:47:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.044 07:47:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.045 07:47:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.045 07:47:54 -- common/autotest_common.sh@10 -- # set +x 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.045 07:47:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.045 07:47:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.982 07:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.982 07:47:55 -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.982 07:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.982 [2024-10-07 07:47:55.833586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:51.982 [2024-10-07 07:47:55.833620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.982 [2024-10-07 07:47:55.833631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.982 [2024-10-07 07:47:55.833656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.982 [2024-10-07 07:47:55.833663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.982 [2024-10-07 07:47:55.833670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.982 [2024-10-07 07:47:55.833677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.982 [2024-10-07 07:47:55.833684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.982 [2024-10-07 07:47:55.833691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.982 [2024-10-07 07:47:55.833698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.982 [2024-10-07 07:47:55.833705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.982 [2024-10-07 07:47:55.833712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bee900 is same with the state(5) to be set 00:28:51.982 [2024-10-07 07:47:55.843608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bee900 (9): Bad file descriptor 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.982 07:47:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.982 [2024-10-07 07:47:55.853649] 
nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:52.919 07:47:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.919 07:47:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.919 07:47:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.919 07:47:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.919 07:47:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.919 07:47:56 -- common/autotest_common.sh@10 -- # set +x 00:28:52.919 07:47:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.177 [2024-10-07 07:47:56.918082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:54.114 [2024-10-07 07:47:57.942087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:54.114 [2024-10-07 07:47:57.942132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bee900 with addr=10.0.0.2, port=4420 00:28:54.114 [2024-10-07 07:47:57.942149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bee900 is same with the state(5) to be set 00:28:54.114 [2024-10-07 07:47:57.942173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:54.114 [2024-10-07 07:47:57.942184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:54.114 [2024-10-07 07:47:57.942194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:54.114 [2024-10-07 07:47:57.942211] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:54.114 [2024-10-07 07:47:57.942595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bee900 (9): Bad file descriptor 00:28:54.114 [2024-10-07 07:47:57.942621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.114 [2024-10-07 07:47:57.942646] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:54.114 [2024-10-07 07:47:57.942669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.114 [2024-10-07 07:47:57.942683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.114 [2024-10-07 07:47:57.942698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.114 [2024-10-07 07:47:57.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.114 [2024-10-07 07:47:57.942722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.114 [2024-10-07 07:47:57.942733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.114 [2024-10-07 
07:47:57.942744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.114 [2024-10-07 07:47:57.942754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.114 [2024-10-07 07:47:57.942765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.114 [2024-10-07 07:47:57.942776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.114 [2024-10-07 07:47:57.942786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:54.114 [2024-10-07 07:47:57.943177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1beddf0 (9): Bad file descriptor 00:28:54.114 [2024-10-07 07:47:57.944191] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:54.114 [2024-10-07 07:47:57.944208] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:54.114 07:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.115 07:47:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.115 07:47:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:55.051 07:47:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.051 07:47:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.051 07:47:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.051 07:47:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.051 07:47:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.051 07:47:58 -- common/autotest_common.sh@10 -- # set +x 00:28:55.051 07:47:58 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.051 07:47:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.051 07:47:59 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:55.051 07:47:59 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.051 07:47:59 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.310 07:47:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.310 07:47:59 -- common/autotest_common.sh@10 -- # set +x 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.310 07:47:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:55.310 07:47:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.247 [2024-10-07 07:47:59.956803] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:56.247 [2024-10-07 07:47:59.956818] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:56.247 [2024-10-07 07:47:59.956830] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:56.247 [2024-10-07 07:48:00.045112] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.247 07:48:00 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.247 07:48:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.247 07:48:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.247 07:48:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:56.247 07:48:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.506 [2024-10-07 07:48:00.270826] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:56.506 [2024-10-07 07:48:00.270859] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:56.506 [2024-10-07 07:48:00.270876] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:56.506 [2024-10-07 07:48:00.270889] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:56.506 [2024-10-07 07:48:00.270896] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:56.506 [2024-10-07 07:48:00.317751] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bfc6a0 was disconnected and freed. delete nvme_qpair. 
00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.443 07:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.443 07:48:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.443 07:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:57.443 07:48:01 -- host/discovery_remove_ifc.sh@90 -- # killprocess 84911 00:28:57.443 07:48:01 -- common/autotest_common.sh@926 -- # '[' -z 84911 ']' 00:28:57.443 07:48:01 -- common/autotest_common.sh@930 -- # kill -0 84911 00:28:57.443 07:48:01 -- common/autotest_common.sh@931 -- # uname 00:28:57.443 07:48:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:57.443 07:48:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84911 00:28:57.443 07:48:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:57.443 07:48:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:57.443 07:48:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84911' 00:28:57.443 killing process with pid 84911 00:28:57.443 07:48:01 -- common/autotest_common.sh@945 -- # kill 84911 00:28:57.443 07:48:01 -- common/autotest_common.sh@950 -- # wait 84911 00:28:57.702 07:48:01 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:57.702 07:48:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:57.702 07:48:01 -- nvmf/common.sh@116 -- # sync 00:28:57.702 07:48:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:57.702 07:48:01 -- 
nvmf/common.sh@119 -- # set +e 00:28:57.702 07:48:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:57.702 07:48:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:57.702 rmmod nvme_tcp 00:28:57.702 rmmod nvme_fabrics 00:28:57.702 rmmod nvme_keyring 00:28:57.702 07:48:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:57.702 07:48:01 -- nvmf/common.sh@123 -- # set -e 00:28:57.702 07:48:01 -- nvmf/common.sh@124 -- # return 0 00:28:57.702 07:48:01 -- nvmf/common.sh@477 -- # '[' -n 84673 ']' 00:28:57.702 07:48:01 -- nvmf/common.sh@478 -- # killprocess 84673 00:28:57.702 07:48:01 -- common/autotest_common.sh@926 -- # '[' -z 84673 ']' 00:28:57.702 07:48:01 -- common/autotest_common.sh@930 -- # kill -0 84673 00:28:57.702 07:48:01 -- common/autotest_common.sh@931 -- # uname 00:28:57.702 07:48:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:57.702 07:48:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84673 00:28:57.702 07:48:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:57.702 07:48:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:57.702 07:48:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84673' 00:28:57.702 killing process with pid 84673 00:28:57.702 07:48:01 -- common/autotest_common.sh@945 -- # kill 84673 00:28:57.702 07:48:01 -- common/autotest_common.sh@950 -- # wait 84673 00:28:57.962 07:48:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:57.962 07:48:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:57.962 07:48:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:57.962 07:48:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.962 07:48:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:57.962 07:48:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.962 07:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.962 07:48:01 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.499 07:48:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:00.499 00:29:00.499 real 0m21.903s 00:29:00.499 user 0m27.400s 00:29:00.499 sys 0m5.363s 00:29:00.499 07:48:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.499 07:48:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.499 ************************************ 00:29:00.499 END TEST nvmf_discovery_remove_ifc 00:29:00.499 ************************************ 00:29:00.499 07:48:03 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:00.499 07:48:03 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:00.499 07:48:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:00.499 07:48:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.499 07:48:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.499 ************************************ 00:29:00.499 START TEST nvmf_digest 00:29:00.499 ************************************ 00:29:00.499 07:48:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:00.499 * Looking for test storage... 
00:29:00.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.499 07:48:03 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.499 07:48:03 -- nvmf/common.sh@7 -- # uname -s 00:29:00.499 07:48:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.499 07:48:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.499 07:48:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.499 07:48:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.499 07:48:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.499 07:48:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.499 07:48:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.499 07:48:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.499 07:48:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.499 07:48:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.499 07:48:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:00.499 07:48:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:00.499 07:48:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.499 07:48:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.499 07:48:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.499 07:48:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.499 07:48:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.499 07:48:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.499 07:48:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.499 07:48:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.499 07:48:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.499 07:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.499 07:48:04 -- paths/export.sh@5 -- # export PATH 00:29:00.499 07:48:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.499 07:48:04 -- nvmf/common.sh@46 -- # : 0 00:29:00.499 07:48:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:00.499 07:48:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:00.499 07:48:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:00.499 07:48:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.499 07:48:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.499 07:48:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:00.499 07:48:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:00.499 07:48:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:00.499 07:48:04 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:00.499 07:48:04 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:00.499 07:48:04 -- host/digest.sh@16 -- # runtime=2 00:29:00.499 07:48:04 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:00.499 07:48:04 -- host/digest.sh@132 -- # nvmftestinit 00:29:00.499 07:48:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:00.499 07:48:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.499 07:48:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:00.499 07:48:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:00.499 07:48:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:00.499 07:48:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.499 07:48:04 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:29:00.499 07:48:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.499 07:48:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:00.499 07:48:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:00.499 07:48:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:00.499 07:48:04 -- common/autotest_common.sh@10 -- # set +x 00:29:05.774 07:48:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:05.774 07:48:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:05.774 07:48:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:05.774 07:48:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:05.774 07:48:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:05.774 07:48:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:05.774 07:48:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:05.774 07:48:09 -- nvmf/common.sh@294 -- # net_devs=() 00:29:05.774 07:48:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:05.774 07:48:09 -- nvmf/common.sh@295 -- # e810=() 00:29:05.774 07:48:09 -- nvmf/common.sh@295 -- # local -ga e810 00:29:05.774 07:48:09 -- nvmf/common.sh@296 -- # x722=() 00:29:05.774 07:48:09 -- nvmf/common.sh@296 -- # local -ga x722 00:29:05.774 07:48:09 -- nvmf/common.sh@297 -- # mlx=() 00:29:05.774 07:48:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:05.774 07:48:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.774 07:48:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:05.774 07:48:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:05.774 07:48:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:05.774 07:48:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:05.774 07:48:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:05.774 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:05.774 07:48:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:05.774 07:48:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:05.774 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:05.774 07:48:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:29:05.774 07:48:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:05.774 07:48:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:05.774 07:48:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:05.774 07:48:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.774 07:48:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:05.774 07:48:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.774 07:48:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:05.774 Found net devices under 0000:af:00.0: cvl_0_0 00:29:05.774 07:48:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.774 07:48:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:05.774 07:48:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.775 07:48:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:05.775 07:48:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.775 07:48:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:05.775 Found net devices under 0000:af:00.1: cvl_0_1 00:29:05.775 07:48:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.775 07:48:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:05.775 07:48:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:05.775 07:48:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:05.775 07:48:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:05.775 07:48:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:05.775 07:48:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.775 07:48:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.775 07:48:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.775 07:48:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:05.775 07:48:09 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.775 07:48:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.775 07:48:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:05.775 07:48:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.775 07:48:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.775 07:48:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:05.775 07:48:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:05.775 07:48:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.775 07:48:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.775 07:48:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.775 07:48:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.775 07:48:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:05.775 07:48:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.775 07:48:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.775 07:48:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.775 07:48:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:05.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:29:05.775 00:29:05.775 --- 10.0.0.2 ping statistics --- 00:29:05.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.775 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:29:05.775 07:48:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:29:05.775 00:29:05.775 --- 10.0.0.1 ping statistics --- 00:29:05.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.775 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:05.775 07:48:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.775 07:48:09 -- nvmf/common.sh@410 -- # return 0 00:29:05.775 07:48:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:05.775 07:48:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.775 07:48:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:05.775 07:48:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:05.775 07:48:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.775 07:48:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:05.775 07:48:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:05.775 07:48:09 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:05.775 07:48:09 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:05.775 07:48:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.775 07:48:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.775 07:48:09 -- common/autotest_common.sh@10 -- # set +x 00:29:05.775 ************************************ 00:29:05.775 START TEST nvmf_digest_clean 00:29:05.775 ************************************ 00:29:05.775 07:48:09 -- common/autotest_common.sh@1104 -- # run_digest 00:29:05.775 07:48:09 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:05.775 07:48:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:05.775 07:48:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:05.775 07:48:09 -- common/autotest_common.sh@10 -- # set +x 00:29:05.775 07:48:09 -- nvmf/common.sh@469 -- # nvmfpid=91019 00:29:05.775 07:48:09 -- nvmf/common.sh@470 -- # waitforlisten 91019 00:29:05.775 07:48:09 -- nvmf/common.sh@468 
-- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:05.775 07:48:09 -- common/autotest_common.sh@819 -- # '[' -z 91019 ']' 00:29:05.775 07:48:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.775 07:48:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:05.775 07:48:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.775 07:48:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:05.775 07:48:09 -- common/autotest_common.sh@10 -- # set +x 00:29:05.775 [2024-10-07 07:48:09.583048] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:05.775 [2024-10-07 07:48:09.583114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.775 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.775 [2024-10-07 07:48:09.643429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.775 [2024-10-07 07:48:09.720916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:05.775 [2024-10-07 07:48:09.721023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.775 [2024-10-07 07:48:09.721031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.775 [2024-10-07 07:48:09.721038] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.775 [2024-10-07 07:48:09.721054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.712 07:48:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:06.712 07:48:10 -- common/autotest_common.sh@852 -- # return 0 00:29:06.712 07:48:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:06.712 07:48:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:06.712 07:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.712 07:48:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.712 07:48:10 -- host/digest.sh@120 -- # common_target_config 00:29:06.712 07:48:10 -- host/digest.sh@43 -- # rpc_cmd 00:29:06.712 07:48:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.712 07:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.712 null0 00:29:06.712 [2024-10-07 07:48:10.516492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.712 [2024-10-07 07:48:10.540706] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.712 07:48:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.712 07:48:10 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:06.712 07:48:10 -- host/digest.sh@77 -- # local rw bs qd 00:29:06.712 07:48:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.712 07:48:10 -- host/digest.sh@80 -- # rw=randread 00:29:06.712 07:48:10 -- host/digest.sh@80 -- # bs=4096 00:29:06.712 07:48:10 -- host/digest.sh@80 -- # qd=128 00:29:06.712 07:48:10 -- host/digest.sh@82 -- # bperfpid=91151 00:29:06.712 07:48:10 -- host/digest.sh@83 -- # waitforlisten 91151 /var/tmp/bperf.sock 00:29:06.712 07:48:10 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:06.712 07:48:10 -- 
common/autotest_common.sh@819 -- # '[' -z 91151 ']' 00:29:06.712 07:48:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.712 07:48:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:06.712 07:48:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.712 07:48:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:06.712 07:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.712 [2024-10-07 07:48:10.587173] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:06.712 [2024-10-07 07:48:10.587215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91151 ] 00:29:06.712 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.712 [2024-10-07 07:48:10.642408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.972 [2024-10-07 07:48:10.716904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.541 07:48:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:07.541 07:48:11 -- common/autotest_common.sh@852 -- # return 0 00:29:07.541 07:48:11 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:07.541 07:48:11 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:07.541 07:48:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.800 07:48:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.800 07:48:11 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.059 nvme0n1 00:29:08.059 07:48:12 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:08.059 07:48:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.317 Running I/O for 2 seconds... 00:29:10.223 00:29:10.223 Latency(us) 00:29:10.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:10.223 nvme0n1 : 2.00 29710.59 116.06 0.00 0.00 4303.77 1825.65 12483.05 00:29:10.223 =================================================================================================================== 00:29:10.223 Total : 29710.59 116.06 0.00 0.00 4303.77 1825.65 12483.05 00:29:10.223 0 00:29:10.223 07:48:14 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:10.223 07:48:14 -- host/digest.sh@92 -- # get_accel_stats 00:29:10.223 07:48:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.223 07:48:14 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.223 | select(.opcode=="crc32c") 00:29:10.223 | "\(.module_name) \(.executed)"' 00:29:10.223 07:48:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.483 07:48:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:10.483 07:48:14 -- host/digest.sh@93 -- # exp_module=software 00:29:10.483 07:48:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:10.483 07:48:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.483 07:48:14 -- host/digest.sh@97 -- # killprocess 91151 00:29:10.483 07:48:14 -- common/autotest_common.sh@926 -- # '[' -z 91151 ']' 00:29:10.483 07:48:14 -- 
common/autotest_common.sh@930 -- # kill -0 91151 00:29:10.483 07:48:14 -- common/autotest_common.sh@931 -- # uname 00:29:10.483 07:48:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:10.483 07:48:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91151 00:29:10.483 07:48:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:10.483 07:48:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:10.483 07:48:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91151' 00:29:10.483 killing process with pid 91151 00:29:10.483 07:48:14 -- common/autotest_common.sh@945 -- # kill 91151 00:29:10.483 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.483 00:29:10.483 Latency(us) 00:29:10.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.483 =================================================================================================================== 00:29:10.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.483 07:48:14 -- common/autotest_common.sh@950 -- # wait 91151 00:29:10.742 07:48:14 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:10.742 07:48:14 -- host/digest.sh@77 -- # local rw bs qd 00:29:10.742 07:48:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.742 07:48:14 -- host/digest.sh@80 -- # rw=randread 00:29:10.742 07:48:14 -- host/digest.sh@80 -- # bs=131072 00:29:10.742 07:48:14 -- host/digest.sh@80 -- # qd=16 00:29:10.743 07:48:14 -- host/digest.sh@82 -- # bperfpid=91779 00:29:10.743 07:48:14 -- host/digest.sh@83 -- # waitforlisten 91779 /var/tmp/bperf.sock 00:29:10.743 07:48:14 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:10.743 07:48:14 -- common/autotest_common.sh@819 -- # '[' -z 91779 ']' 00:29:10.743 07:48:14 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.743 07:48:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:10.743 07:48:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.743 07:48:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:10.743 07:48:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.743 [2024-10-07 07:48:14.624497] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:10.743 [2024-10-07 07:48:14.624543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91779 ] 00:29:10.743 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.743 Zero copy mechanism will not be used. 
00:29:10.743 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.743 [2024-10-07 07:48:14.678100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.001 [2024-10-07 07:48:14.753116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.568 07:48:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.568 07:48:15 -- common/autotest_common.sh@852 -- # return 0 00:29:11.568 07:48:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:11.568 07:48:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:11.568 07:48:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.827 07:48:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.827 07:48:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.086 nvme0n1 00:29:12.087 07:48:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:12.087 07:48:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.087 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.087 Zero copy mechanism will not be used. 00:29:12.087 Running I/O for 2 seconds... 
00:29:14.628 00:29:14.628 Latency(us) 00:29:14.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.628 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:14.628 nvme0n1 : 2.00 4844.06 605.51 0.00 0.00 3300.61 542.23 7833.11 00:29:14.628 =================================================================================================================== 00:29:14.628 Total : 4844.06 605.51 0.00 0.00 3300.61 542.23 7833.11 00:29:14.628 0 00:29:14.628 07:48:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:14.628 07:48:18 -- host/digest.sh@92 -- # get_accel_stats 00:29:14.628 07:48:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.628 07:48:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.628 | select(.opcode=="crc32c") 00:29:14.628 | "\(.module_name) \(.executed)"' 00:29:14.628 07:48:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.628 07:48:18 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:14.628 07:48:18 -- host/digest.sh@93 -- # exp_module=software 00:29:14.628 07:48:18 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:14.628 07:48:18 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.628 07:48:18 -- host/digest.sh@97 -- # killprocess 91779 00:29:14.628 07:48:18 -- common/autotest_common.sh@926 -- # '[' -z 91779 ']' 00:29:14.628 07:48:18 -- common/autotest_common.sh@930 -- # kill -0 91779 00:29:14.628 07:48:18 -- common/autotest_common.sh@931 -- # uname 00:29:14.628 07:48:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.628 07:48:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91779 00:29:14.628 07:48:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:14.628 07:48:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:14.628 07:48:18 -- common/autotest_common.sh@944 -- # 
echo 'killing process with pid 91779' 00:29:14.628 killing process with pid 91779 00:29:14.628 07:48:18 -- common/autotest_common.sh@945 -- # kill 91779 00:29:14.628 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.628 00:29:14.628 Latency(us) 00:29:14.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.628 =================================================================================================================== 00:29:14.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.628 07:48:18 -- common/autotest_common.sh@950 -- # wait 91779 00:29:14.628 07:48:18 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:14.628 07:48:18 -- host/digest.sh@77 -- # local rw bs qd 00:29:14.628 07:48:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:14.628 07:48:18 -- host/digest.sh@80 -- # rw=randwrite 00:29:14.628 07:48:18 -- host/digest.sh@80 -- # bs=4096 00:29:14.628 07:48:18 -- host/digest.sh@80 -- # qd=128 00:29:14.628 07:48:18 -- host/digest.sh@82 -- # bperfpid=92465 00:29:14.628 07:48:18 -- host/digest.sh@83 -- # waitforlisten 92465 /var/tmp/bperf.sock 00:29:14.628 07:48:18 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:14.628 07:48:18 -- common/autotest_common.sh@819 -- # '[' -z 92465 ']' 00:29:14.628 07:48:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.628 07:48:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.628 07:48:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:14.628 07:48:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.628 07:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:14.628 [2024-10-07 07:48:18.525076] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:14.628 [2024-10-07 07:48:18.525125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92465 ] 00:29:14.628 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.628 [2024-10-07 07:48:18.580132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.890 [2024-10-07 07:48:18.648093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.458 07:48:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.458 07:48:19 -- common/autotest_common.sh@852 -- # return 0 00:29:15.458 07:48:19 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:15.458 07:48:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:15.458 07:48:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.718 07:48:19 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.718 07:48:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.977 nvme0n1 00:29:15.977 07:48:19 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:15.977 07:48:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.977 Running I/O for 2 seconds... 
00:29:18.014 00:29:18.014 Latency(us) 00:29:18.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.014 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.014 nvme0n1 : 2.00 29704.95 116.03 0.00 0.00 4303.91 1934.87 7895.53 00:29:18.014 =================================================================================================================== 00:29:18.014 Total : 29704.95 116.03 0.00 0.00 4303.91 1934.87 7895.53 00:29:18.014 0 00:29:18.014 07:48:21 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:18.014 07:48:21 -- host/digest.sh@92 -- # get_accel_stats 00:29:18.014 07:48:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:18.014 07:48:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:18.014 | select(.opcode=="crc32c") 00:29:18.014 | "\(.module_name) \(.executed)"' 00:29:18.014 07:48:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:18.273 07:48:22 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:18.273 07:48:22 -- host/digest.sh@93 -- # exp_module=software 00:29:18.273 07:48:22 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:18.273 07:48:22 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:18.273 07:48:22 -- host/digest.sh@97 -- # killprocess 92465 00:29:18.273 07:48:22 -- common/autotest_common.sh@926 -- # '[' -z 92465 ']' 00:29:18.273 07:48:22 -- common/autotest_common.sh@930 -- # kill -0 92465 00:29:18.273 07:48:22 -- common/autotest_common.sh@931 -- # uname 00:29:18.273 07:48:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.273 07:48:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92465 00:29:18.273 07:48:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:18.273 07:48:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:18.273 07:48:22 -- common/autotest_common.sh@944 -- 
# echo 'killing process with pid 92465' 00:29:18.273 killing process with pid 92465 00:29:18.273 07:48:22 -- common/autotest_common.sh@945 -- # kill 92465 00:29:18.273 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.273 00:29:18.273 Latency(us) 00:29:18.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.273 =================================================================================================================== 00:29:18.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.273 07:48:22 -- common/autotest_common.sh@950 -- # wait 92465 00:29:18.532 07:48:22 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:18.532 07:48:22 -- host/digest.sh@77 -- # local rw bs qd 00:29:18.532 07:48:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.532 07:48:22 -- host/digest.sh@80 -- # rw=randwrite 00:29:18.532 07:48:22 -- host/digest.sh@80 -- # bs=131072 00:29:18.532 07:48:22 -- host/digest.sh@80 -- # qd=16 00:29:18.532 07:48:22 -- host/digest.sh@82 -- # bperfpid=93156 00:29:18.532 07:48:22 -- host/digest.sh@83 -- # waitforlisten 93156 /var/tmp/bperf.sock 00:29:18.532 07:48:22 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:18.532 07:48:22 -- common/autotest_common.sh@819 -- # '[' -z 93156 ']' 00:29:18.532 07:48:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.532 07:48:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:18.532 07:48:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:18.532 07:48:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:18.532 07:48:22 -- common/autotest_common.sh@10 -- # set +x 00:29:18.532 [2024-10-07 07:48:22.405648] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:18.532 [2024-10-07 07:48:22.405696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93156 ] 00:29:18.532 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:18.532 Zero copy mechanism will not be used. 00:29:18.532 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.532 [2024-10-07 07:48:22.458859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.790 [2024-10-07 07:48:22.527576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.356 07:48:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:19.356 07:48:23 -- common/autotest_common.sh@852 -- # return 0 00:29:19.356 07:48:23 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:19.356 07:48:23 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:19.356 07:48:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.614 07:48:23 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.614 07:48:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.872 nvme0n1 00:29:19.872 07:48:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:19.872 07:48:23 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.872 Zero copy mechanism will not be used. 00:29:19.873 Running I/O for 2 seconds... 00:29:22.401 00:29:22.402 Latency(us) 00:29:22.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.402 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:22.402 nvme0n1 : 2.00 6747.99 843.50 0.00 0.00 2367.36 1607.19 16103.13 00:29:22.402 =================================================================================================================== 00:29:22.402 Total : 6747.99 843.50 0.00 0.00 2367.36 1607.19 16103.13 00:29:22.402 0 00:29:22.402 07:48:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:22.402 07:48:25 -- host/digest.sh@92 -- # get_accel_stats 00:29:22.402 07:48:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:22.402 07:48:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:22.402 | select(.opcode=="crc32c") 00:29:22.402 | "\(.module_name) \(.executed)"' 00:29:22.402 07:48:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:22.402 07:48:26 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:22.402 07:48:26 -- host/digest.sh@93 -- # exp_module=software 00:29:22.402 07:48:26 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:22.402 07:48:26 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:22.402 07:48:26 -- host/digest.sh@97 -- # killprocess 93156 00:29:22.402 07:48:26 -- common/autotest_common.sh@926 -- # '[' -z 93156 ']' 00:29:22.402 07:48:26 -- common/autotest_common.sh@930 -- # kill -0 93156 00:29:22.402 07:48:26 -- common/autotest_common.sh@931 -- # uname 00:29:22.402 07:48:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.402 07:48:26 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93156 00:29:22.402 07:48:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:22.402 07:48:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:22.402 07:48:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93156' 00:29:22.402 killing process with pid 93156 00:29:22.402 07:48:26 -- common/autotest_common.sh@945 -- # kill 93156 00:29:22.402 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.402 00:29:22.402 Latency(us) 00:29:22.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.402 =================================================================================================================== 00:29:22.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.402 07:48:26 -- common/autotest_common.sh@950 -- # wait 93156 00:29:22.402 07:48:26 -- host/digest.sh@126 -- # killprocess 91019 00:29:22.402 07:48:26 -- common/autotest_common.sh@926 -- # '[' -z 91019 ']' 00:29:22.402 07:48:26 -- common/autotest_common.sh@930 -- # kill -0 91019 00:29:22.402 07:48:26 -- common/autotest_common.sh@931 -- # uname 00:29:22.402 07:48:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.402 07:48:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91019 00:29:22.402 07:48:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:22.402 07:48:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:22.402 07:48:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91019' 00:29:22.402 killing process with pid 91019 00:29:22.402 07:48:26 -- common/autotest_common.sh@945 -- # kill 91019 00:29:22.402 07:48:26 -- common/autotest_common.sh@950 -- # wait 91019 00:29:22.660 00:29:22.660 real 0m17.026s 00:29:22.660 user 0m32.576s 00:29:22.660 sys 0m4.639s 00:29:22.660 07:48:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.660 
07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 ************************************ 00:29:22.660 END TEST nvmf_digest_clean 00:29:22.660 ************************************ 00:29:22.660 07:48:26 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:22.660 07:48:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:22.660 07:48:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.660 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 ************************************ 00:29:22.660 START TEST nvmf_digest_error 00:29:22.660 ************************************ 00:29:22.660 07:48:26 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:22.660 07:48:26 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:22.660 07:48:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:22.660 07:48:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:22.660 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 07:48:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:22.660 07:48:26 -- nvmf/common.sh@469 -- # nvmfpid=93877 00:29:22.660 07:48:26 -- nvmf/common.sh@470 -- # waitforlisten 93877 00:29:22.660 07:48:26 -- common/autotest_common.sh@819 -- # '[' -z 93877 ']' 00:29:22.660 07:48:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.660 07:48:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.660 07:48:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:22.660 07:48:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.660 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.918 [2024-10-07 07:48:26.633151] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:22.918 [2024-10-07 07:48:26.633195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.918 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.918 [2024-10-07 07:48:26.691347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.918 [2024-10-07 07:48:26.765367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:22.918 [2024-10-07 07:48:26.765491] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.918 [2024-10-07 07:48:26.765499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.918 [2024-10-07 07:48:26.765506] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:22.918 [2024-10-07 07:48:26.765522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.918 07:48:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.918 07:48:26 -- common/autotest_common.sh@852 -- # return 0 00:29:22.918 07:48:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:22.918 07:48:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:22.918 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.918 07:48:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.918 07:48:26 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:22.918 07:48:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.918 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.918 [2024-10-07 07:48:26.829944] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:22.918 07:48:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.918 07:48:26 -- host/digest.sh@104 -- # common_target_config 00:29:22.918 07:48:26 -- host/digest.sh@43 -- # rpc_cmd 00:29:22.918 07:48:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.918 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.176 null0 00:29:23.176 [2024-10-07 07:48:26.922801] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.176 [2024-10-07 07:48:26.947000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.176 07:48:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.176 07:48:26 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:23.176 07:48:26 -- host/digest.sh@54 -- # local rw bs qd 00:29:23.176 07:48:26 -- host/digest.sh@56 -- # rw=randread 00:29:23.176 07:48:26 -- host/digest.sh@56 -- # bs=4096 00:29:23.176 07:48:26 -- host/digest.sh@56 -- # qd=128 00:29:23.176 07:48:26 -- 
host/digest.sh@58 -- # bperfpid=93898 00:29:23.176 07:48:26 -- host/digest.sh@60 -- # waitforlisten 93898 /var/tmp/bperf.sock 00:29:23.176 07:48:26 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:23.176 07:48:26 -- common/autotest_common.sh@819 -- # '[' -z 93898 ']' 00:29:23.176 07:48:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:23.176 07:48:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.176 07:48:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:23.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:23.176 07:48:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.176 07:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.176 [2024-10-07 07:48:26.994148] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:23.176 [2024-10-07 07:48:26.994191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93898 ] 00:29:23.176 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.176 [2024-10-07 07:48:27.047617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.176 [2024-10-07 07:48:27.123044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.106 07:48:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:24.106 07:48:27 -- common/autotest_common.sh@852 -- # return 0 00:29:24.106 07:48:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.106 07:48:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.106 07:48:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:24.106 07:48:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.106 07:48:27 -- common/autotest_common.sh@10 -- # set +x 00:29:24.106 07:48:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.106 07:48:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.106 07:48:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.671 nvme0n1 00:29:24.671 07:48:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:24.671 07:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.671 07:48:28 -- common/autotest_common.sh@10 -- # 
set +x 00:29:24.671 07:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.671 07:48:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:24.671 07:48:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:24.671 Running I/O for 2 seconds... 00:29:24.671 [2024-10-07 07:48:28.520399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:24.671 [2024-10-07 07:48:28.520437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.671 [2024-10-07 07:48:28.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.671
READ sqid:1 cid:116 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.214451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.222858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.222878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.222886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.231806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.231826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.240220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.240239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.240247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.248414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 
07:48:29.248433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.248441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.257199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.257219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.257226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.265516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.265540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.265548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.273767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.273786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.273794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.282802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.282821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.282829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.291210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.291231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.291239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.299786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.299806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.299814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.308387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.308410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.308420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.316662] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.316681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.316689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.324849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.324869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.324877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.333787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.333814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.342110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.342129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.342137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.350359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.350378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.358644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.358663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.358671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.367466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.367485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.367493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.453 [2024-10-07 07:48:29.375596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.453 [2024-10-07 07:48:29.375614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.453 [2024-10-07 07:48:29.375622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.454 [2024-10-07 07:48:29.384461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.454 [2024-10-07 07:48:29.384480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.454 [2024-10-07 07:48:29.384487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.454 [2024-10-07 07:48:29.392703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.454 [2024-10-07 07:48:29.392722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.454 [2024-10-07 07:48:29.392729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.454 [2024-10-07 07:48:29.400807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.454 [2024-10-07 07:48:29.400826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.454 [2024-10-07 07:48:29.400833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.454 [2024-10-07 07:48:29.409163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.454 [2024-10-07 07:48:29.409182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.454 [2024-10-07 
07:48:29.409193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.454 [2024-10-07 07:48:29.417927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.454 [2024-10-07 07:48:29.417947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.454 [2024-10-07 07:48:29.417955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.426291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.426311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.426319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.434737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.434756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.434764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.444332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.444352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22605 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.444360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.452425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.452445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.452453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.460664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.460683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.460691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.468829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.468848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.468856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.477638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.477657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.477665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.485953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.485972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.485980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.494146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.494165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.494173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.502982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.503002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.503010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.511358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.511377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.511384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.519422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.519441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.713 [2024-10-07 07:48:29.519448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.713 [2024-10-07 07:48:29.528681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.713 [2024-10-07 07:48:29.528701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.528709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.537074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.537094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.537102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.545479] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.545498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.545507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.553667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.553686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.553697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.562445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.562464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.562472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.570636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.570656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.570663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.578969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.578989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.578996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.587848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.587869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.587877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.595974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.595994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.596002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.604305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.604324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.604332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.612944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.612963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.612971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.621345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.621365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.621373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.629705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.629729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 07:48:29.629737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.714 [2024-10-07 07:48:29.638593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:25.714 [2024-10-07 07:48:29.638613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.714 [2024-10-07 
07:48:29.638621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.714 [2024-10-07 07:48:29.646780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.714 [2024-10-07 07:48:29.646799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.714 [2024-10-07 07:48:29.646807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.714 [2024-10-07 07:48:29.655181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.714 [2024-10-07 07:48:29.655201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.714 [2024-10-07 07:48:29.655209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.714 [2024-10-07 07:48:29.663960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.714 [2024-10-07 07:48:29.663982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.714 [2024-10-07 07:48:29.663989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.714 [2024-10-07 07:48:29.672151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.714 [2024-10-07 07:48:29.672171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.714 [2024-10-07 07:48:29.672179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.714 [2024-10-07 07:48:29.680590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.714 [2024-10-07 07:48:29.680610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.714 [2024-10-07 07:48:29.680618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.689578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.689597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.689605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.697980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.698000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.698007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.706118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.706137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.706145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.714825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.714843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.714851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.723089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.723108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.723115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.731244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.731263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.740194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.740214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.740221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.748549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.748568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.748576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.756905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.756924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.756932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.765576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.765595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.765602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.773861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.773880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.773891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.782034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.782053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.782066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.790991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.791011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.791018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.799626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.799655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.807726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.807745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.807752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.816078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.816097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.816105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.824785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.824804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.824812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.832929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.832948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.832956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.841975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.841993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.842002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.850148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.850170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.850178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.858361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.858379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.858387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.866793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.974 [2024-10-07 07:48:29.866812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.974 [2024-10-07 07:48:29.866820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.974 [2024-10-07 07:48:29.875555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.875575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.883684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.883704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.883712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.891906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.891926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.891933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.900761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.900780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.900788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.908892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.908911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.908919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.917278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.917298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.917311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.925932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.925950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.925958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.934078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.934097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.934104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.975 [2024-10-07 07:48:29.942546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:25.975 [2024-10-07 07:48:29.942565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.975 [2024-10-07 07:48:29.942573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.951475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.951494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.951501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.959766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.959784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.959793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.968106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.968126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.968133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.976744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.976764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.976772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.984974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.984994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.985001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:29.993097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:29.993120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:29.993128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.002247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.002268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.002276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.010870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.010891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.010898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.020890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.020909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.020917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.029904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.029925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.029933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.038536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.038556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.038564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.048267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.048288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.048297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.057465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.057486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.057494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.065987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.066007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.066016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.074364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.074385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.074393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.082897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.082917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.082925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.091856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.091876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.091884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.101787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.101807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.101816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.110898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.110917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.110925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.119418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.119437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.234 [2024-10-07 07:48:30.119445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.234 [2024-10-07 07:48:30.127842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.234 [2024-10-07 07:48:30.127862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.127870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.136424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.136444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.145398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.145418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.145429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.153764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.153784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.153792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.162321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.162340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.162348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.171412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.171431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.171439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.179850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.179869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.179877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.188329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.188348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.188356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.235 [2024-10-07 07:48:30.197201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.235 [2024-10-07 07:48:30.197221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.235 [2024-10-07 07:48:30.197229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.494 [2024-10-07 07:48:30.205704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.494 [2024-10-07 07:48:30.205724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.494 [2024-10-07 07:48:30.205731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.494 [2024-10-07 07:48:30.214293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.494 [2024-10-07 07:48:30.214312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.494 [2024-10-07 07:48:30.214320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.494 [2024-10-07 07:48:30.223362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.223385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.223393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.231726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.231745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.231753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.240307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.240325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.240333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.249122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.249141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.249149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.257615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.257634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.257642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.266164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.266185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.266192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.275147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.275168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.275176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.283709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.283728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.283736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.292073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.292092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.292100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.495 [2024-10-07 07:48:30.301038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90)
00:29:26.495 [2024-10-07 07:48:30.301065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.495 [2024-10-07 07:48:30.301074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.309614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.309634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.309642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.317976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.317996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.318003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.327161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.327180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.327189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.335824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.335844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 
07:48:30.335852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.345034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.345055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.345068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.355034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.355054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.355071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.364583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.364604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.364612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.373178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.373200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9827 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.373208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.381714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.381734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.381741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.390782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.390802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.390810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.399331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.399351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.399359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.407812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.407832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.416933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.416953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.416960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.425436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.425456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.425464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.433692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.433712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.433719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.442717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.442737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.442745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.451334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.495 [2024-10-07 07:48:30.451353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.495 [2024-10-07 07:48:30.451361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.495 [2024-10-07 07:48:30.459753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.496 [2024-10-07 07:48:30.459773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.496 [2024-10-07 07:48:30.459780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.755 [2024-10-07 07:48:30.468384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.755 [2024-10-07 07:48:30.468404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.755 [2024-10-07 07:48:30.468412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.755 [2024-10-07 07:48:30.477375] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.755 [2024-10-07 07:48:30.477394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.755 [2024-10-07 07:48:30.477402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.755 [2024-10-07 07:48:30.485758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.755 [2024-10-07 07:48:30.485777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.755 [2024-10-07 07:48:30.485785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.755 [2024-10-07 07:48:30.494310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.755 [2024-10-07 07:48:30.494330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.755 [2024-10-07 07:48:30.494338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.755 [2024-10-07 07:48:30.503481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2392f90) 00:29:26.755 [2024-10-07 07:48:30.503502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.755 [2024-10-07 07:48:30.503521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:29:26.755 
00:29:26.755 Latency(us)
00:29:26.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:26.755 nvme0n1 : 2.00 29516.84 115.30 0.00 0.00 4332.27 1825.65 14230.67
00:29:26.755 ===================================================================================================================
00:29:26.755 Total : 29516.84 115.30 0.00 0.00 4332.27 1825.65 14230.67
00:29:26.755 0
00:29:26.755 07:48:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:26.755 07:48:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:26.755 07:48:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:26.755 | .driver_specific
00:29:26.755 | .nvme_error
00:29:26.755 | .status_code
00:29:26.755 | .command_transient_transport_error'
00:29:26.755 07:48:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:26.755 07:48:30 -- host/digest.sh@71 -- # (( 231 > 0 ))
00:29:26.755 07:48:30 -- host/digest.sh@73 -- # killprocess 93898
00:29:26.755 07:48:30 -- common/autotest_common.sh@926 -- # '[' -z 93898 ']'
00:29:26.755 07:48:30 -- common/autotest_common.sh@930 -- # kill -0 93898
00:29:26.755 07:48:30 -- common/autotest_common.sh@931 -- # uname
00:29:27.015 07:48:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:27.015 07:48:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93898
00:29:27.015 07:48:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:27.015 07:48:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:27.015 07:48:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93898'
00:29:27.015 killing process with pid 93898
00:29:27.015 07:48:30 -- common/autotest_common.sh@945 -- # kill 93898
00:29:27.015 Received shutdown signal,
test time was about 2.000000 seconds
00:29:27.015 
00:29:27.015 Latency(us)
00:29:27.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.015 ===================================================================================================================
00:29:27.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:27.015 07:48:30 -- common/autotest_common.sh@950 -- # wait 93898
00:29:27.015 07:48:30 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:29:27.015 07:48:30 -- host/digest.sh@54 -- # local rw bs qd
00:29:27.015 07:48:30 -- host/digest.sh@56 -- # rw=randread
00:29:27.015 07:48:30 -- host/digest.sh@56 -- # bs=131072
00:29:27.015 07:48:30 -- host/digest.sh@56 -- # qd=16
00:29:27.015 07:48:30 -- host/digest.sh@58 -- # bperfpid=94590
00:29:27.015 07:48:30 -- host/digest.sh@60 -- # waitforlisten 94590 /var/tmp/bperf.sock
00:29:27.015 07:48:30 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:27.015 07:48:30 -- common/autotest_common.sh@819 -- # '[' -z 94590 ']'
00:29:27.015 07:48:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:27.015 07:48:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:27.015 07:48:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:27.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:27.015 07:48:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:27.015 07:48:30 -- common/autotest_common.sh@10 -- # set +x
00:29:27.275 [2024-10-07 07:48:31.020064] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:29:27.275 [2024-10-07 07:48:31.020127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94590 ]
00:29:27.275 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:27.275 Zero copy mechanism will not be used.
00:29:27.275 EAL: No free 2048 kB hugepages reported on node 1
00:29:27.275 [2024-10-07 07:48:31.074807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.275 [2024-10-07 07:48:31.139599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:28.212 07:48:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:28.212 07:48:31 -- common/autotest_common.sh@852 -- # return 0
00:29:28.212 07:48:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.212 07:48:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.212 07:48:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:28.212 07:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:28.212 07:48:32 -- common/autotest_common.sh@10 -- # set +x
00:29:28.212 07:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:28.212 07:48:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:28.212 07:48:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:28.472 nvme0n1
00:29:28.472 07:48:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:28.472 07:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.472 07:48:32 -- common/autotest_common.sh@10 -- # set +x 00:29:28.472 07:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.472 07:48:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:28.472 07:48:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.732 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.732 Zero copy mechanism will not be used. 00:29:28.732 Running I/O for 2 seconds... 00:29:28.732 [2024-10-07 07:48:32.509439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.509473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.509483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.519468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.519492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.519501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.528171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.528192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.732 [2024-10-07 07:48:32.528200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.535945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.535964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.535972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.543104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.543123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.543131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.549840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.549859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.549866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.556277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.556302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.556309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.562574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.562593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.562601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.568707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.568727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.568735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.575612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.575631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.575639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.583756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.583778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.583786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.591161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.591182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.591190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.601988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.732 [2024-10-07 07:48:32.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.732 [2024-10-07 07:48:32.602016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.732 [2024-10-07 07:48:32.612545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.612565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.612573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.622301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.622321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.622329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.631753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.631773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.631781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.643943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.643964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.643972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.653480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.653501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.653510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.663897] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.663917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.663925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.672449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.672470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.672478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.681845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.681865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.681874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.733 [2024-10-07 07:48:32.692751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.733 [2024-10-07 07:48:32.692771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.733 [2024-10-07 07:48:32.692778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.702912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.702932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.702940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.712414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.712434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.712445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.721836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.721859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.721868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.731683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.731704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.731712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.739942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.739963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.739972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.750694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.750715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.750723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.762466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.762487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.762495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.770706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.770726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 
07:48:32.770735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.778856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.778885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.787743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.787772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.993 [2024-10-07 07:48:32.799489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.993 [2024-10-07 07:48:32.799510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.993 [2024-10-07 07:48:32.799518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.807794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.807815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.815627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.815648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.815656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.824817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.824838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.824846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.832513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.832535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.832543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.840820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.840841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.840851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.849303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.849324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.849333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.858083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.858106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.858114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.865587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.865609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.873429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 
00:29:28.994 [2024-10-07 07:48:32.873450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.873458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.880761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.880782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.880791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.888602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.888623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.888632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.896457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.896479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.896487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.903631] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.903652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.903661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.913032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.913052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.913066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.921802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.921824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.921832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.930407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.930428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.930436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.938661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.938686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.938694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.946140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.946160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.946168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.994 [2024-10-07 07:48:32.953553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:28.994 [2024-10-07 07:48:32.953574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.994 [2024-10-07 07:48:32.953582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.254 [2024-10-07 07:48:32.963273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.254 [2024-10-07 07:48:32.963295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.254 [2024-10-07 07:48:32.963303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.254 [2024-10-07 07:48:32.972062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.254 [2024-10-07 07:48:32.972082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.254 [2024-10-07 07:48:32.972091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.254 [2024-10-07 07:48:32.981377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.254 [2024-10-07 07:48:32.981398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.254 [2024-10-07 07:48:32.981406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.254 [2024-10-07 07:48:32.988826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:32.988846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:32.988853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:32.993284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:32.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:32.993313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.000508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.000530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.000538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.008393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.008414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.008422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.020990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.021011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.021018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.030719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.030741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.030749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.039797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.039817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.039825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.049214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.049235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.049242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.057745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.057766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.057774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.068707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.068727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.068735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.078159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.078179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.088390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.088411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.088422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.097255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.097276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.097284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.105831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.105853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.105861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.113957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.113977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.113985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.121680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.121701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.121709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.129368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.129390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.129398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.137024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.137046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.137054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.145293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.145315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.145323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.154204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.154224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.154232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.162683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.162708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.162716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.255 [2024-10-07 07:48:33.171076] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.255 [2024-10-07 07:48:33.171097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.255 [2024-10-07 07:48:33.171105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.256 [2024-10-07 07:48:33.180878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.256 [2024-10-07 07:48:33.180900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.256 [2024-10-07 07:48:33.180908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.256 [2024-10-07 07:48:33.189299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.256 [2024-10-07 07:48:33.189321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.256 [2024-10-07 07:48:33.189329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.256 [2024-10-07 07:48:33.197947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.256 [2024-10-07 07:48:33.197968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.256 [2024-10-07 07:48:33.197975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:29.256 [2024-10-07 07:48:33.206875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.256 [2024-10-07 07:48:33.206896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.256 [2024-10-07 07:48:33.206904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.256 [2024-10-07 07:48:33.216264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.256 [2024-10-07 07:48:33.216285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.256 [2024-10-07 07:48:33.216293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.225931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.225953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.225961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.235543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.235564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.235572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.245398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.245419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.245427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.254651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.254672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.254681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.266668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.266690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.266698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.276541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.276562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.515 [2024-10-07 07:48:33.276571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.515 [2024-10-07 07:48:33.286803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.515 [2024-10-07 07:48:33.286824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.296997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.297018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.297026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.306514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.306535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.306543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.317247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.317268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:29.516 [2024-10-07 07:48:33.317276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.327519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.327541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.327552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.336764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.336786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.336793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.346014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.346036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.346044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.354695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.354716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.354724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.364918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.364939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.364947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.375155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.375176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.375184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.384378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.384400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.384408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.395737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.395757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.395765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.405392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.405414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.405423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.415727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.415749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.415757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.425361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.425383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.425391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.435167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 
00:29:29.516 [2024-10-07 07:48:33.435188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.435196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.444147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.444168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.444176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.452263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.452284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.452291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.460636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.460657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.460665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.470222] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.470244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.470252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.516 [2024-10-07 07:48:33.478261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.516 [2024-10-07 07:48:33.478282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.516 [2024-10-07 07:48:33.478290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.486790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.486823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.494525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.494545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.494553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.501423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.501444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.501451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.508116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.508137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.508145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.514593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.514614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.514622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.520977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.520998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.521005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.526654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.526675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.532228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.532249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.532257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.540278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.540299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.540306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.550839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.550871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.560352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.560373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.560380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.568997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.569017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.569024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.576722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.576742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.576750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.583286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.583307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.583315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.589685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.589705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.589713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.595842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.595862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.595870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.601425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.601446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.601454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.607512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.607533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.607540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.777 [2024-10-07 07:48:33.613164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.777 [2024-10-07 07:48:33.613185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.777 [2024-10-07 07:48:33.613192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.618331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.618351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.618359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.623774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.623794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.623801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.629290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.629310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.629317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.634850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.634870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.634877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.640361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.640381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.640388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.646016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.646035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.646043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.651740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 
00:29:29.778 [2024-10-07 07:48:33.651761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.651768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.657357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.657377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.657388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.662071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.662090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.662097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.667451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.667471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.667478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.672836] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.672856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.672863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.678257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.678277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.678284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.683558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.683578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.683585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.689050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.689075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.689082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.694512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.694532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.694539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.699874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.699896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.699903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.705437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.705460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.705467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.710829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.710849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.710857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.716256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.716287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.716295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.721626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.721646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.721654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.778 [2024-10-07 07:48:33.727078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.778 [2024-10-07 07:48:33.727097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.778 [2024-10-07 07:48:33.727105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.779 [2024-10-07 07:48:33.732564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.779 [2024-10-07 07:48:33.732583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.779 [2024-10-07 07:48:33.732591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.779 [2024-10-07 07:48:33.738003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.779 [2024-10-07 07:48:33.738023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.779 [2024-10-07 07:48:33.738030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.779 [2024-10-07 07:48:33.743422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:29.779 [2024-10-07 07:48:33.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.779 [2024-10-07 07:48:33.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.748848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.748869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.754254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.754275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.754282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.759516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.759539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.759547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.764579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.764599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.764607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.769433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.769453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.769461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.774840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.774867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.780492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.780513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.780521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.786097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.786118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.786127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.791631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.791652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.791659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.797048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.797074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.797086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.802406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.802425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.802433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.807826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.807846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.807853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.813251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.813272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.813279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.818607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.818627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.818635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.823932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.823951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.823958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.829386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.829406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.829413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.834840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.834859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.834867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.840282] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.840302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.840309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.845741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.845768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.845775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.852053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.040 [2024-10-07 07:48:33.852078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.040 [2024-10-07 07:48:33.852101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.040 [2024-10-07 07:48:33.856916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.856936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.856943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.861357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.861377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.861384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.865393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.865414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.865422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.869323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.869344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.869351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.873182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.873207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.873214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.877747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.877768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.877775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.883139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.883160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.883170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.888491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.888511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.888519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.893883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.893903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 
07:48:33.893910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.899292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.899311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.899319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.904654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.904674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.904681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.909963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.909983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.909990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.915212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.915233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.915240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.920352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.920372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.920380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.925691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.925712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.925720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.931173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.931197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.931205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.936566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.936586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.936594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.942054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.942080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.942088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.947411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.947431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.947439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.952828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.952848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.952856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.958225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 
00:29:30.041 [2024-10-07 07:48:33.958245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.958253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.041 [2024-10-07 07:48:33.963650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.041 [2024-10-07 07:48:33.963671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.041 [2024-10-07 07:48:33.963678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.969138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.969158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.969166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.974524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.974544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.974552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.979917] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.979937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.979944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.985379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.985399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.985407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.990705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.990725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.990732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:33.996150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:33.996170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:33.996178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:34.001641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:34.001666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:34.001674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.042 [2024-10-07 07:48:34.006978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.042 [2024-10-07 07:48:34.006998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.042 [2024-10-07 07:48:34.007006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.302 [2024-10-07 07:48:34.012387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.302 [2024-10-07 07:48:34.012407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.302 [2024-10-07 07:48:34.012414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.302 [2024-10-07 07:48:34.017938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.302 [2024-10-07 07:48:34.017958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.017965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.023452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.023471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.023482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.028897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.028917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.028924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.034915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.034936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.034943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.041039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.041065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.041074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.046487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.046507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.046516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.051900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.051920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.051927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.057456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.057476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.062900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.062920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.062928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.068318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.068337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.068345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.073834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.073859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.073866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.079844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.079865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.086245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.086266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.086273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.092661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.092682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.092689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.099284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.099305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.099312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.105472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.105492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.105499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.111160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.111181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.111189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.116941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.116961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.122740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.122761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.122768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.128305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.128325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.128332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.133804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 
00:29:30.303 [2024-10-07 07:48:34.133824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.133831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.139348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.303 [2024-10-07 07:48:34.139368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.303 [2024-10-07 07:48:34.139376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.303 [2024-10-07 07:48:34.144834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.144854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.144861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.150395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.150415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.150423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.155898] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.155918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.155925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.161404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.161424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.161431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.166911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.166931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.166938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.172430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.172450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.172461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.178143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.178164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.184008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.184037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.189652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.189673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.189681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.195150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.195170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.195177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.200280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.200301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.200308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.205144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.205164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.205171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.209836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.209856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.209864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.214596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.214617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.214625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.219800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.219823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.219831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.225222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.225242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.225249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.230576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.230597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.230604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.235960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.235980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:30.304 [2024-10-07 07:48:34.235987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.241358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.241378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.241386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.246798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.246818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.252225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.252246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.304 [2024-10-07 07:48:34.252253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.304 [2024-10-07 07:48:34.257547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.304 [2024-10-07 07:48:34.257567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.305 [2024-10-07 07:48:34.257575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.305 [2024-10-07 07:48:34.262993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.305 [2024-10-07 07:48:34.263014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.305 [2024-10-07 07:48:34.263025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.305 [2024-10-07 07:48:34.269084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.305 [2024-10-07 07:48:34.269105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.305 [2024-10-07 07:48:34.269123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.274757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.274777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.274784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.280552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.280573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.280581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.286015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.286035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.286043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.290993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.291013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.291020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.294169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.294188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.294197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.299667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.299687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.565 [2024-10-07 07:48:34.299695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.565 [2024-10-07 07:48:34.305079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.565 [2024-10-07 07:48:34.305098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.305106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.310511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.310691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.310699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.316609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.316629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.316637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.322828] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.322848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.322856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.329010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.329030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.329038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.336111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.336139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.342447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.342467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.342475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.348578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.348597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.348604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.354400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.354419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.354426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.360358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.360377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.360384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.366237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.366256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.366264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.371848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.371867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.371875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.377502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.377521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.377528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.383165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.383184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.383192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.389389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.389409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 
07:48:34.389416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.394152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.394171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.394178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.400432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.400452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.400459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.406209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.406229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.406236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.411600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.411620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.411631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.416045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.416069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.416077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.420975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.420995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.421003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.566 [2024-10-07 07:48:34.425953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.566 [2024-10-07 07:48:34.425973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.566 [2024-10-07 07:48:34.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.431728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.431749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.431757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.436873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.436894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.436902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.442210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.442231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.442239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.447638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.447659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.447667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.453612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.453633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.453641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.459118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.459142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.459150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.465202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.465222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.465230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.471655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0) 00:29:30.567 [2024-10-07 07:48:34.471675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.567 [2024-10-07 07:48:34.471683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.567 [2024-10-07 07:48:34.477906] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0)
00:29:30.567 [2024-10-07 07:48:34.477926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.567 [2024-10-07 07:48:34.477934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:30.567 [2024-10-07 07:48:34.484472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0)
00:29:30.567 [2024-10-07 07:48:34.484493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.567 [2024-10-07 07:48:34.484500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:30.567 [2024-10-07 07:48:34.490858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc65f0)
00:29:30.567 [2024-10-07 07:48:34.490878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.567 [2024-10-07 07:48:34.490886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.567
00:29:30.567 Latency(us)
00:29:30.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:30.567 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:30.567 nvme0n1 : 2.00 4473.36 559.17 0.00 0.00 3574.34 522.73 15416.56
00:29:30.567 ===================================================================================================================
00:29:30.567 Total : 4473.36 559.17 0.00 0.00 3574.34 522.73 15416.56
00:29:30.567 0
00:29:30.567 07:48:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:30.567 07:48:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:30.567 07:48:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:30.567 | .driver_specific
00:29:30.567 | .nvme_error
00:29:30.567 | .status_code
00:29:30.567 | .command_transient_transport_error'
00:29:30.567 07:48:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:30.827 07:48:34 -- host/digest.sh@71 -- # (( 288 > 0 ))
00:29:30.827 07:48:34 -- host/digest.sh@73 -- # killprocess 94590
00:29:30.827 07:48:34 -- common/autotest_common.sh@926 -- # '[' -z 94590 ']'
00:29:30.827 07:48:34 -- common/autotest_common.sh@930 -- # kill -0 94590
00:29:30.827 07:48:34 -- common/autotest_common.sh@931 -- # uname
00:29:30.827 07:48:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:30.827 07:48:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94590
00:29:30.827 07:48:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:30.827 07:48:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:30.827 07:48:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94590'
killing process with pid 94590
07:48:34 -- common/autotest_common.sh@945 -- # kill 94590
Received shutdown signal, test time was about 2.000000 seconds
00:29:30.827
00:29:30.827 Latency(us)
00:29:30.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:30.827 ===================================================================================================================
00:29:30.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:30.827 07:48:34 -- common/autotest_common.sh@950 -- # wait 94590
00:29:31.087 07:48:34 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:31.087 07:48:34 -- host/digest.sh@54 -- # local rw bs qd
00:29:31.087 07:48:34 -- host/digest.sh@56 -- # rw=randwrite
00:29:31.087 07:48:34 -- host/digest.sh@56 -- # bs=4096
00:29:31.087 07:48:34 -- host/digest.sh@56 -- # qd=128
00:29:31.087 07:48:34 -- host/digest.sh@58 -- # bperfpid=95278
00:29:31.087 07:48:34 -- host/digest.sh@60 -- # waitforlisten 95278 /var/tmp/bperf.sock
00:29:31.087 07:48:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:31.087 07:48:34 -- common/autotest_common.sh@819 -- # '[' -z 95278 ']'
00:29:31.087 07:48:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:31.087 07:48:34 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:31.087 07:48:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:31.087 07:48:34 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:31.087 07:48:34 -- common/autotest_common.sh@10 -- # set +x
00:29:31.087 [2024-10-07 07:48:34.996520] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:29:31.087 [2024-10-07 07:48:34.996569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95278 ]
00:29:31.087 EAL: No free 2048 kB hugepages reported on node 1
00:29:31.087 [2024-10-07 07:48:35.051577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.346 [2024-10-07 07:48:35.120685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:31.913 07:48:35 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:31.913 07:48:35 -- common/autotest_common.sh@852 -- # return 0
00:29:31.913 07:48:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:31.914 07:48:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:32.173 07:48:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:32.173 07:48:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:32.173 07:48:35 -- common/autotest_common.sh@10 -- # set +x
00:29:32.173 07:48:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:32.173 07:48:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.173 07:48:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.432 nvme0n1
00:29:32.432 07:48:36 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:32.432 07:48:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:32.432 07:48:36 -- common/autotest_common.sh@10 -- # set +x
00:29:32.432 07:48:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:32.432 07:48:36 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:32.432 07:48:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:32.691 Running I/O for 2 seconds...
00:29:32.691 [2024-10-07 07:48:36.492938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5a90
00:29:32.691 [2024-10-07 07:48:36.493890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.691 [2024-10-07 07:48:36.493917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:32.691 [2024-10-07 07:48:36.501888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5220
00:29:32.691 [2024-10-07 07:48:36.502805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.691 [2024-10-07 07:48:36.502827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:32.691 [2024-10-07 07:48:36.510627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5220
00:29:32.691 [2024-10-07 07:48:36.511591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.691 [2024-10-07 07:48:36.511610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:32.691 [2024-10-07 07:48:36.519343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5220
00:29:32.691 [2024-10-07 07:48:36.520315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.691 [2024-10-07 07:48:36.520334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.528043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5220
00:29:32.692 [2024-10-07 07:48:36.529016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.529034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.536777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e84c0
00:29:32.692 [2024-10-07 07:48:36.537723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.537741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.545428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e1b48
00:29:32.692 [2024-10-07 07:48:36.546391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.546409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.554350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3d08
00:29:32.692 [2024-10-07 07:48:36.555263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.555281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.562585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8618
00:29:32.692 [2024-10-07 07:48:36.563018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.563036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.571741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f0788
00:29:32.692 [2024-10-07 07:48:36.572374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.572392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.580584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f0350
00:29:32.692 [2024-10-07 07:48:36.581369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.581387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.589278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fac10
00:29:32.692 [2024-10-07 07:48:36.590023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.590041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.597934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.598695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.598714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.606616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.607406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.607425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.615304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.616115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.616134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.623987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.624811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.624829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.632673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.633462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.633483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.641299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.642103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.642120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.649961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.650772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.650790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:32.692 [2024-10-07 07:48:36.658748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.692 [2024-10-07 07:48:36.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.692 [2024-10-07 07:48:36.659625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.667823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.668687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.668705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.676539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.677377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.677394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.685208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.686053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.686073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.693871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.694731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.694749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.702544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.703432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.703450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.711216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.952 [2024-10-07 07:48:36.712134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.952 [2024-10-07 07:48:36.712151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:32.952 [2024-10-07 07:48:36.719889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.953 [2024-10-07 07:48:36.720812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.720829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.728408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.953 [2024-10-07 07:48:36.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.729323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.737077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128
00:29:32.953 [2024-10-07 07:48:36.737972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.737989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.745730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ec408
00:29:32.953 [2024-10-07 07:48:36.746637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.746656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.754646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.755605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.755623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.763363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fbcf0
00:29:32.953 [2024-10-07 07:48:36.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.772037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ef270
00:29:32.953 [2024-10-07 07:48:36.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.773012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.780680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ec840
00:29:32.953 [2024-10-07 07:48:36.781660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.781689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.789357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e99d8
00:29:32.953 [2024-10-07 07:48:36.790309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.790327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.798010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e6b70
00:29:32.953 [2024-10-07 07:48:36.798971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.806710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3d08
00:29:32.953 [2024-10-07 07:48:36.807712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.807730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.815420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190eff18
00:29:32.953 [2024-10-07 07:48:36.816446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.816463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.824117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f9b30
00:29:32.953 [2024-10-07 07:48:36.825139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.825157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.832808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fbcf0
00:29:32.953 [2024-10-07 07:48:36.833644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.833661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.841501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.842370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.850125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.850997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.858795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.859684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.859707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.867465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.868381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.876146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.877087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.877105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.884865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ef6a8
00:29:32.953 [2024-10-07 07:48:36.885891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.885909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.893539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3060
00:29:32.953 [2024-10-07 07:48:36.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.894536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.902154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e27f0
00:29:32.953 [2024-10-07 07:48:36.903084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.903102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.910817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e9168
00:29:32.953 [2024-10-07 07:48:36.911850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.911868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:32.953 [2024-10-07 07:48:36.919555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140
00:29:32.953 [2024-10-07 07:48:36.920571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.953 [2024-10-07 07:48:36.920589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.928473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e6b70
00:29:33.213 [2024-10-07 07:48:36.929488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.929506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.936473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e99d8
00:29:33.213 [2024-10-07 07:48:36.937312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.937330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.945157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f0350
00:29:33.213 [2024-10-07 07:48:36.945992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.946010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.953856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e6b70
00:29:33.213 [2024-10-07 07:48:36.954701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.954719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.962813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f1868
00:29:33.213 [2024-10-07 07:48:36.963555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.963572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.971501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f1430
00:29:33.213 [2024-10-07 07:48:36.972297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.972316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.980143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ee5c8
00:29:33.213 [2024-10-07 07:48:36.980948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.980966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.988842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f92c0
00:29:33.213 [2024-10-07 07:48:36.989651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.989668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:36.997837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68
00:29:33.213 [2024-10-07 07:48:36.998565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.213 [2024-10-07 07:48:36.998585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:33.213 [2024-10-07 07:48:37.006808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68
00:29:33.213 [2024-10-07 07:48:37.007574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.214 [2024-10-07 07:48:37.007592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:33.214 [2024-10-07 07:48:37.015480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68
00:29:33.214 [2024-10-07 07:48:37.016261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.214 [2024-10-07 07:48:37.016280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:33.214 [2024-10-07 07:48:37.024171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68
00:29:33.214 [2024-10-07 07:48:37.024948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-07 07:48:37.024965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.032864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.033625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.033642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.041471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.042239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.042256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.050135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.050925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.058773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.059590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21611 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.059607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.067436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.068265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.068283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.076100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.076931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.076948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.084709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.085522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.085542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.093323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.094147] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.094165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.101968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.102802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.102820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.110623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.111507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.119323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.120226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.127974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 
07:48:37.128865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.128882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.136667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.137532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.137549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.145302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.146178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.146196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.153960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.154845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.154863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.162641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with 
pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.163538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.163556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.171323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e0a68 00:29:33.214 [2024-10-07 07:48:37.172244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.172261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:33.214 [2024-10-07 07:48:37.180075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2d80 00:29:33.214 [2024-10-07 07:48:37.181019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.214 [2024-10-07 07:48:37.181037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 07:48:37.188968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fa3a0 00:29:33.474 [2024-10-07 07:48:37.189908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.474 [2024-10-07 07:48:37.189926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 07:48:37.197667] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e84c0 00:29:33.474 [2024-10-07 07:48:37.198595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.474 [2024-10-07 07:48:37.198613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 07:48:37.206291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8618 00:29:33.474 [2024-10-07 07:48:37.207285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.474 [2024-10-07 07:48:37.207303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 07:48:37.214939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7da8 00:29:33.474 [2024-10-07 07:48:37.215921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.474 [2024-10-07 07:48:37.215939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 07:48:37.223566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7970 00:29:33.474 [2024-10-07 07:48:37.224620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.474 [2024-10-07 07:48:37.224637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:33.474 [2024-10-07 
07:48:37.232248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190eff18 00:29:33.475 [2024-10-07 07:48:37.233435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.233453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.240582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ec840 00:29:33.475 [2024-10-07 07:48:37.240857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.240874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.249321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e6fa8 00:29:33.475 [2024-10-07 07:48:37.249664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.249682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.259224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8e88 00:29:33.475 [2024-10-07 07:48:37.260220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.260238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.266897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ee190 00:29:33.475 [2024-10-07 07:48:37.267811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.267829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.275788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7da8 00:29:33.475 [2024-10-07 07:48:37.276515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.276533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.284486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc998 00:29:33.475 [2024-10-07 07:48:37.285257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.285274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.293366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.294222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.294239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.301987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.302871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.302889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.310687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.311570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.311590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.319360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.320208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.327987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.328845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.328862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.336626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.337492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.337509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.345317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.346261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.346279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.354008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.354929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.354947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.362679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.363631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 
[2024-10-07 07:48:37.363650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.371399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4578 00:29:33.475 [2024-10-07 07:48:37.372316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.372334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.380002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5a90 00:29:33.475 [2024-10-07 07:48:37.380974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.380992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.388676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.475 [2024-10-07 07:48:37.389567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.389585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.397313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f3a28 00:29:33.475 [2024-10-07 07:48:37.398230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19927 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.398248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.406239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f3a28 00:29:33.475 [2024-10-07 07:48:37.407149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.407167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.414315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e84c0 00:29:33.475 [2024-10-07 07:48:37.415142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.415160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.422985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e84c0 00:29:33.475 [2024-10-07 07:48:37.423841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.423858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.431862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fac10 00:29:33.475 [2024-10-07 07:48:37.432766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:119 nsid:1 lba:2783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.432785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:33.475 [2024-10-07 07:48:37.440610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fac10 00:29:33.475 [2024-10-07 07:48:37.441530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.475 [2024-10-07 07:48:37.441549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.449794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6cc8 00:29:33.735 [2024-10-07 07:48:37.450649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.735 [2024-10-07 07:48:37.450668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.458552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ef6a8 00:29:33.735 [2024-10-07 07:48:37.459404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.735 [2024-10-07 07:48:37.459423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.467227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2d80 00:29:33.735 [2024-10-07 07:48:37.468106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.735 [2024-10-07 07:48:37.468124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.475963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fa3a0 00:29:33.735 [2024-10-07 07:48:37.476807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.735 [2024-10-07 07:48:37.476825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.484978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.735 [2024-10-07 07:48:37.485606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.735 [2024-10-07 07:48:37.485624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:33.735 [2024-10-07 07:48:37.493836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.735 [2024-10-07 07:48:37.494600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.494617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.502657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with 
pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.503440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.503458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.511501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.512296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.512324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.520206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.521002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.521020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.528862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.529683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.529700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.537596] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.538383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.538404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.546228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.547023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.547040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.554877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.555683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.555700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.563561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.564397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.564416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 
07:48:37.572268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.573091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.573125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.580962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.581831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.581848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.589680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.590566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.590584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.598410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.599259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.599276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.607044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.607927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.607945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.615749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.616622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.616643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.624391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f8a50 00:29:33.736 [2024-10-07 07:48:37.625273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.625291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.633020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ea680 00:29:33.736 [2024-10-07 07:48:37.633959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.633977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.641722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f0788 00:29:33.736 [2024-10-07 07:48:37.642662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.642680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.650275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fa3a0 00:29:33.736 [2024-10-07 07:48:37.651171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.651189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.658939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f9f68 00:29:33.736 [2024-10-07 07:48:37.660090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.660108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.667619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7da8 00:29:33.736 [2024-10-07 07:48:37.668828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.668846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.676388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7da8 00:29:33.736 [2024-10-07 07:48:37.677384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.677402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.685052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f7da8 00:29:33.736 [2024-10-07 07:48:37.686126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.686144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.693332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e9e10 00:29:33.736 [2024-10-07 07:48:37.694131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 [2024-10-07 07:48:37.694149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.736 [2024-10-07 07:48:37.702110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140 00:29:33.736 [2024-10-07 07:48:37.702914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.736 
[2024-10-07 07:48:37.702931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.711048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6890 00:29:33.996 [2024-10-07 07:48:37.711866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.711884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.719784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3498 00:29:33.996 [2024-10-07 07:48:37.720595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.720613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.728838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f46d0 00:29:33.996 [2024-10-07 07:48:37.729505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.729523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.737713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.996 [2024-10-07 07:48:37.738459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25216 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.738476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.746419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.996 [2024-10-07 07:48:37.747155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.747174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.755138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.996 [2024-10-07 07:48:37.755879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.755896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:33.996 [2024-10-07 07:48:37.764022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.996 [2024-10-07 07:48:37.764747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.996 [2024-10-07 07:48:37.764765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.772666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.773396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:7893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.773414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.781359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.782099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.782118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.790051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.790828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.790846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.798703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.799487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.799504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.807387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.808188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.808206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.816054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.816874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.824701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.825482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.825500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.833336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.834159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.834177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.842064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 
00:29:33.997 [2024-10-07 07:48:37.842860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.842881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.850694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.851595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.851613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.859461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.860326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.860344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.868167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.869019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.869036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.876889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.877729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.877746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.885586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.886430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.894257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.895125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.895143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.902966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.903864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.903882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.911660] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.912686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.912704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.920345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.921230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.921248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.928995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:33.997 [2024-10-07 07:48:37.929915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.929932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.937696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fbcf0 00:29:33.997 [2024-10-07 07:48:37.938638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.938656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:29:33.997 [2024-10-07 07:48:37.946326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3d08 00:29:33.997 [2024-10-07 07:48:37.947328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.947346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.954997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e99d8 00:29:33.997 [2024-10-07 07:48:37.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.956039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.997 [2024-10-07 07:48:37.963753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e7c50 00:29:33.997 [2024-10-07 07:48:37.964675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.997 [2024-10-07 07:48:37.964693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:37.972152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ec408 00:29:34.266 [2024-10-07 07:48:37.973066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:37.973085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:37.982242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128 00:29:34.266 [2024-10-07 07:48:37.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:37.983572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:37.991251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6cc8 00:29:34.266 [2024-10-07 07:48:37.992627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:37.992645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:37.998925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e4140 00:29:34.266 [2024-10-07 07:48:38.000129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.000146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.007506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e3498 00:29:34.266 [2024-10-07 07:48:38.008910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.008928] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.016333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e6b70 00:29:34.266 [2024-10-07 07:48:38.017194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.017212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.024948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.266 [2024-10-07 07:48:38.025885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.025903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.034344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e9e10 00:29:34.266 [2024-10-07 07:48:38.035566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.035584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.043053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:34.266 [2024-10-07 07:48:38.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.044315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.051722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ed4e8 00:29:34.266 [2024-10-07 07:48:38.052943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.052961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.060356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fbcf0 00:29:34.266 [2024-10-07 07:48:38.061567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.061585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.068961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc998 00:29:34.266 [2024-10-07 07:48:38.070178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.070199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.076736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190edd58 00:29:34.266 [2024-10-07 07:48:38.077672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:34.266 [2024-10-07 07:48:38.077690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.086504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc998 00:29:34.266 [2024-10-07 07:48:38.087207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.087225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.095163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ed4e8 00:29:34.266 [2024-10-07 07:48:38.095846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.095864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.103775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.266 [2024-10-07 07:48:38.104510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.104527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.112422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128 00:29:34.266 [2024-10-07 07:48:38.113255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.113273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.120375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190eea00 00:29:34.266 [2024-10-07 07:48:38.121545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.121562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.129281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190eaef0 00:29:34.266 [2024-10-07 07:48:38.130183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.130201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.137910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e9e10 00:29:34.266 [2024-10-07 07:48:38.138769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.138787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.147216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f2948 00:29:34.266 [2024-10-07 07:48:38.147862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.147880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.154794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e9e10 00:29:34.266 [2024-10-07 07:48:38.155715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.155733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.163430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6cc8 00:29:34.266 [2024-10-07 07:48:38.164566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.164584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.172042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6cc8 00:29:34.266 [2024-10-07 07:48:38.172921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.172938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.180635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6cc8 00:29:34.266 
[2024-10-07 07:48:38.181519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.181536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.189283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190eea00 00:29:34.266 [2024-10-07 07:48:38.190199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.190217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.198327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fa7d8 00:29:34.266 [2024-10-07 07:48:38.199590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.199607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.206080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f5378 00:29:34.266 [2024-10-07 07:48:38.206743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.206760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.214736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x179a1e0) with pdu=0x2000190eaab8 00:29:34.266 [2024-10-07 07:48:38.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.215503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.223432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f96f8 00:29:34.266 [2024-10-07 07:48:38.224181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.224200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:34.266 [2024-10-07 07:48:38.232263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190fc128 00:29:34.266 [2024-10-07 07:48:38.233008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.266 [2024-10-07 07:48:38.233026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.241117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190ed4e8 00:29:34.529 [2024-10-07 07:48:38.241892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.241910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.250081] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6890 00:29:34.529 [2024-10-07 07:48:38.250549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.250567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.258863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f6020 00:29:34.529 [2024-10-07 07:48:38.259520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.259538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.267832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f5378 00:29:34.529 [2024-10-07 07:48:38.268511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.268528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.276479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e5ec8 00:29:34.529 [2024-10-07 07:48:38.277121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.277139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:29:34.529 [2024-10-07 07:48:38.285163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.285840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.285858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.293838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.294498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.302506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.303192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.303210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.311180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.311884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.311901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.319819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.320558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.328468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.329163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.329180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.337126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.337824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.337841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.345737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.346447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.346465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.354443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.355183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.355200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.363131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.363891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.363908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.371819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.372598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.372615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.380490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.381239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.381257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.389156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.389935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.389953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.397834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.398600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.398617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.406445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.407235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.529 [2024-10-07 07:48:38.407253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.415132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.529 [2024-10-07 07:48:38.415946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:34.529 [2024-10-07 07:48:38.415963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.529 [2024-10-07 07:48:38.423828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.424675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.424692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.432500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.433328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.433345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.441289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.442098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.442115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.449968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.450807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 
nsid:1 lba:21533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.450825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.458677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.459533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.459550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.467367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.468315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.468332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.476468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190e38d0 00:29:34.530 [2024-10-07 07:48:38.477342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.477360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:34.530 [2024-10-07 07:48:38.485335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a1e0) with pdu=0x2000190f0ff8 00:29:34.530 [2024-10-07 07:48:38.485858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.530 [2024-10-07 07:48:38.485875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:34.530 00:29:34.530 Latency(us) 00:29:34.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.530 nvme0n1 : 2.00 29275.80 114.36 0.00 0.00 4367.55 2044.10 12483.05 00:29:34.530 =================================================================================================================== 00:29:34.530 Total : 29275.80 114.36 0.00 0.00 4367.55 2044.10 12483.05 00:29:34.530 0 00:29:34.789 07:48:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:34.789 07:48:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:34.789 07:48:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:34.789 | .driver_specific 00:29:34.789 | .nvme_error 00:29:34.789 | .status_code 00:29:34.789 | .command_transient_transport_error' 00:29:34.789 07:48:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:34.789 07:48:38 -- host/digest.sh@71 -- # (( 230 > 0 )) 00:29:34.789 07:48:38 -- host/digest.sh@73 -- # killprocess 95278 00:29:34.789 07:48:38 -- common/autotest_common.sh@926 -- # '[' -z 95278 ']' 00:29:34.789 07:48:38 -- common/autotest_common.sh@930 -- # kill -0 95278 00:29:34.789 07:48:38 -- common/autotest_common.sh@931 -- # uname 00:29:34.789 07:48:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:34.789 07:48:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95278 00:29:34.789 07:48:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:34.789 07:48:38 -- common/autotest_common.sh@936 
-- # '[' reactor_1 = sudo ']' 00:29:34.789 07:48:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95278' 00:29:34.789 killing process with pid 95278 00:29:34.789 07:48:38 -- common/autotest_common.sh@945 -- # kill 95278 00:29:34.789 Received shutdown signal, test time was about 2.000000 seconds 00:29:34.789 00:29:34.789 Latency(us) 00:29:34.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.789 =================================================================================================================== 00:29:34.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.789 07:48:38 -- common/autotest_common.sh@950 -- # wait 95278 00:29:35.047 07:48:38 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:35.047 07:48:38 -- host/digest.sh@54 -- # local rw bs qd 00:29:35.047 07:48:38 -- host/digest.sh@56 -- # rw=randwrite 00:29:35.047 07:48:38 -- host/digest.sh@56 -- # bs=131072 00:29:35.047 07:48:38 -- host/digest.sh@56 -- # qd=16 00:29:35.047 07:48:38 -- host/digest.sh@58 -- # bperfpid=95854 00:29:35.047 07:48:38 -- host/digest.sh@60 -- # waitforlisten 95854 /var/tmp/bperf.sock 00:29:35.047 07:48:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:35.047 07:48:38 -- common/autotest_common.sh@819 -- # '[' -z 95854 ']' 00:29:35.047 07:48:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:35.047 07:48:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.047 07:48:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:35.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:35.047 07:48:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.047 07:48:38 -- common/autotest_common.sh@10 -- # set +x 00:29:35.047 [2024-10-07 07:48:38.990722] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:35.047 [2024-10-07 07:48:38.990772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95854 ] 00:29:35.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.047 Zero copy mechanism will not be used. 00:29:35.047 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.305 [2024-10-07 07:48:39.047448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.305 [2024-10-07 07:48:39.120231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.874 07:48:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:35.874 07:48:39 -- common/autotest_common.sh@852 -- # return 0 00:29:35.874 07:48:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:35.874 07:48:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:36.133 07:48:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:36.133 07:48:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.133 07:48:39 -- common/autotest_common.sh@10 -- # set +x 00:29:36.133 07:48:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.133 07:48:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.133 07:48:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.392 nvme0n1 00:29:36.392 07:48:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:36.392 07:48:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.392 07:48:40 -- common/autotest_common.sh@10 -- # set +x 00:29:36.392 07:48:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.392 07:48:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:36.392 07:48:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:36.652 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:36.652 Zero copy mechanism will not be used. 00:29:36.652 Running I/O for 2 seconds... 00:29:36.652 [2024-10-07 07:48:40.390612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.652 [2024-10-07 07:48:40.390851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-10-07 07:48:40.390881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.652 [2024-10-07 07:48:40.400517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.652 [2024-10-07 07:48:40.400722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.652 [2024-10-07 07:48:40.400761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.652 [2024-10-07 07:48:40.406853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.652 [2024-10-07 07:48:40.407080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.652 [2024-10-07 07:48:40.407103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.652 [2024-10-07 07:48:40.413050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.652 [2024-10-07 07:48:40.413197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.652 [2024-10-07 07:48:40.413217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.652 [2024-10-07 07:48:40.417587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.652 [2024-10-07 07:48:40.417708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.417728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.422781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.422939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.422959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.428643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.428777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.428797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.434000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.434178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.439719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.439991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.440010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.444795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.444884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.444902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.449961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.450027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.450045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.455262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.455424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.455444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.461262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.461350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.461368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.466708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.466793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.466811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.471396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.471555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.471574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.475852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.476139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.476157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.480148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.480353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.480372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.484449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.484593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.484612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.489374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.489477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.489496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.493785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.493899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.493919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.498269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.498339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.498358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.502959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.503022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.503041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.507369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.507516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.507535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.511443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.511720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.511738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.515412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.515662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.515681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.519334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.519510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.519533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.523378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.523489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.523507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.527513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.527634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.527651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.532124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.532206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.532225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.536018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.536207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.536226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.539889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.540099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.653 [2024-10-07 07:48:40.540117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.653 [2024-10-07 07:48:40.543997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.653 [2024-10-07 07:48:40.544238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.544256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.548226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.548518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.548537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.552918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.553103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.553122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.558036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.558176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.558195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.563391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.563457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.563475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.569264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.569378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.569396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.573682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.573842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.573861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.577636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.577789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.581993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.582262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.582281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.586474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.586724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.586743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.590534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.590756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.594513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.594673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.594692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.598421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.598524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.598542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.602290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.602419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.602438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.606162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.606350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.606369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.610031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.610181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.610199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.614122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.614367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.614386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.654 [2024-10-07 07:48:40.618132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.654 [2024-10-07 07:48:40.618394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.654 [2024-10-07 07:48:40.618412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.622414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.622610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.622628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.626298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.626404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.626422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.630094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.630198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.633873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.633976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.633994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.637719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.637873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.637891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.641548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.641676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.641693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.645444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.645699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.645718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.649350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.649607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.649627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.653239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.653411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.653430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.657068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.657179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.657198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.660855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.660944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.660962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.664654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.664742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.664759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.668548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.668729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.672363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.672529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.672547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.676257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.676498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.676516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.680119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.680365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.680383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.683916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.684100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.684119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.687773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.687891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.691564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.691663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.691681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.695389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.695496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.695514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.699954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.700106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.700124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.704988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.705104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.705122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.709852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.710092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.916 [2024-10-07 07:48:40.710110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.916 [2024-10-07 07:48:40.714241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.916 [2024-10-07 07:48:40.714428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.714447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.718812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.718946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.718965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.723074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.723178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.723196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.727376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.727440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.727458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.731652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.731753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.731770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.735953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.736105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.736126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.740201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.740309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.740327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.744689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.744883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.744901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.748922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.749080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.749098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.753691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.753839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.753857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.757835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.757989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.758007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.917 [2024-10-07 07:48:40.762371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:36.917 [2024-10-07 07:48:40.762456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.917 [2024-10-07 07:48:40.762474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.766774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.766870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.766888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.771201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.771356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.771374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.775600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.775738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.775757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.779986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.780216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.780234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.784307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.784474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.784492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.788681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.788829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.788848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.793302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.793444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.793462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.797528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.797626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.917 [2024-10-07 07:48:40.797644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.802083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.802162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.802179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.806600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.806778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.806796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.810793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.810938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.810956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.815150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.815356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.815374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.819411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.819547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.823674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.823800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.823818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.827912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.828009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.828026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.917 [2024-10-07 07:48:40.832435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.917 [2024-10-07 07:48:40.832520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.917 [2024-10-07 07:48:40.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.836801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.836875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.836893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.841413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.841577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.841595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.845962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.846101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.846118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.850247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 
00:29:36.918 [2024-10-07 07:48:40.850456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.854854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.855032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.855050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.859155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.859265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.859282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.863520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.863648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.863667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.867806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.867921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.867938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.872140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.872210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.872227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.876492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.876665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.876684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.918 [2024-10-07 07:48:40.880911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:36.918 [2024-10-07 07:48:40.881086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.918 [2024-10-07 07:48:40.881105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.179 [2024-10-07 07:48:40.885416] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.179 [2024-10-07 07:48:40.885639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.179 [2024-10-07 07:48:40.885658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.179 [2024-10-07 07:48:40.890138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.179 [2024-10-07 07:48:40.890277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.179 [2024-10-07 07:48:40.890296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.179 [2024-10-07 07:48:40.894762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.179 [2024-10-07 07:48:40.894886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.179 [2024-10-07 07:48:40.894903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.179 [2024-10-07 07:48:40.899304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.179 [2024-10-07 07:48:40.899455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.179 [2024-10-07 07:48:40.899474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:37.179 [2024-10-07 07:48:40.903518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.179 [2024-10-07 07:48:40.903595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.179 [2024-10-07 07:48:40.903614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.907633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.907715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.907734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.912296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.912438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.912458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.916696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.916821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.916838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.920970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.921070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.921088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.925841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.926085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.926103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.930302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.930448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.930466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.934225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.934366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.934384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.938133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.938212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.938231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.941996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.942094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.942113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.945814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.946002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.946021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.949692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.949825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.180 [2024-10-07 07:48:40.949844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.953954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.954173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.954192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.958282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.958422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.958441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.962613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.962753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.962775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.966860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.966982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.967001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.971115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.971270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.971290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.975447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.975512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.975531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.979727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.979880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.979899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.983988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.984180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.984198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.988670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.988886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.988904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.993325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.993520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.993549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:40.997497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:40.997596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:40.997615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.001928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 
00:29:37.180 [2024-10-07 07:48:41.002063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:41.002083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.006484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:41.006593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:41.006610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.010288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:41.010365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:41.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.014192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:41.014358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:41.014377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.018233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.180 [2024-10-07 07:48:41.018382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.180 [2024-10-07 07:48:41.018400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.180 [2024-10-07 07:48:41.022368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.022624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.022642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.026137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.026359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.026378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.029918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.030104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.030123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.033701] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.033835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.033854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.037448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.037522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.037540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.041229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.041324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.041342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.045041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.045207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.045226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:37.181 [2024-10-07 07:48:41.048917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.049030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.049047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.053099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.053351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.053369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.056986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.057256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.057274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.060790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.060873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.064935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.065033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.065050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.068921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.069006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.069027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.073158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.073294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.073313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.079136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.079358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.079378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.084345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.084565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.084584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.089433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.089711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.089730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.095535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.095700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.095718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.100764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.100890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.181 [2024-10-07 07:48:41.100909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.106054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.106184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.106202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.111022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.111144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.111163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.116270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.116358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.116376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.121523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.121713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.121732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.126939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.127014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.127032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.132010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.132289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.132307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.137960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.138278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.138297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.181 [2024-10-07 07:48:41.143783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.181 [2024-10-07 07:48:41.143947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.181 [2024-10-07 07:48:41.143966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.151166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.442 [2024-10-07 07:48:41.151341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.151359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.159278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.442 [2024-10-07 07:48:41.159441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.159461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.167205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.442 [2024-10-07 07:48:41.167369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.167387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.175179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 
00:29:37.442 [2024-10-07 07:48:41.175437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.175455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.182866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.442 [2024-10-07 07:48:41.183128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.183145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.190617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.442 [2024-10-07 07:48:41.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.442 [2024-10-07 07:48:41.190868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.442 [2024-10-07 07:48:41.197933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.198121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.198141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.205256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.205495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.205513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.212516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.212691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.212709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.220385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.220590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.220609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.228578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.228765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.228783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 
07:48:41.236073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.236316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.236339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.242850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.242979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.242997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.249150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.249300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.249318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.254474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.254606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.254625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.258418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.258550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.258568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.262354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.262440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.262458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.266345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.266481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.266500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.271759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.271921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.271940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.278287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.278484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.278503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.283249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.283442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.283460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.288280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.288505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.288524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.292480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.292580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.292597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.297307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.297501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.297520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.303330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.303478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.303497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.308388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.308560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.308579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.313114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.313256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.313275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.317650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.317808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.317827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.322099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.322301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.322334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.326780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.326968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.326987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.330538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.330643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.334370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.334508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.334527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.338131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.338230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-10-07 07:48:41.338248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-10-07 07:48:41.342245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.443 [2024-10-07 07:48:41.342396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-10-07 07:48:41.342414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.444 [2024-10-07 07:48:41.346766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.444 [2024-10-07 07:48:41.346830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.346848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.352497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.352656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.352675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.357737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.357914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.357933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.362212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.362338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.362357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.366216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.366296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.366314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.370075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.370210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.370229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.373979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.374054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.374077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.377798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.377927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.377945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.381643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.381732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.381749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.385486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.385648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.385665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.389285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.389472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.389491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.393108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.393360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.393377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.397421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.397547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.397567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.401277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.401417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.401437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.405119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.405202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.405221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.444 [2024-10-07 07:48:41.409025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.444 [2024-10-07 07:48:41.409171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.444 [2024-10-07 07:48:41.409191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.412808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.412888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.412906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.417587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.417754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.417773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.421662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.421866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.421885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.426102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.426367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.426385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.431756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.431910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.431932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.437189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.437296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.437314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.444822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.444964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.444983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.453141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.453267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.453286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.459712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.459797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.465721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.466140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.466159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.478323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.478577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.478595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.486809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.487076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.487095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.493102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.493278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.493296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.498332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.498503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.498523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.503323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.503511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.503530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.508139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.508293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.508312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.512336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.512448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.512467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.517259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.517484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.517504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.523013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.523213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.527909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.528112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.528131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.532614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.705 [2024-10-07 07:48:41.532803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.705 [2024-10-07 07:48:41.532822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.705 [2024-10-07 07:48:41.536973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.537153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.537172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.541369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.541514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.541533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.545187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.545285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.545303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.548984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.549064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.549082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.552794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.552940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.552959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.557239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.557389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.557407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.561740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.561996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.562014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.565588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.565793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.565811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.569400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.569600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.569619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.573481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.573604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.573626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.577218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.577304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.577322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.581000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.581110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.581127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.584804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.584951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.584969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.588576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.588798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.592564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.592816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.592834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.597290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.597477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.597496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.601423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.601561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.601579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.607696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.607806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.607824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.616734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.616969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.616988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.623487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.623586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.623604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.629166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.629296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.629315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.634407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.634501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.634518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.639692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.639813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.639830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.644705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.644841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.644861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.649779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.649901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.649923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.654867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.655001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.655018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.660012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.706 [2024-10-07 07:48:41.660095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.706 [2024-10-07 07:48:41.660117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.706 [2024-10-07 07:48:41.665077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.707 [2024-10-07 07:48:41.665201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.707 [2024-10-07 07:48:41.665222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.707 [2024-10-07 07:48:41.670347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.707 [2024-10-07 07:48:41.670501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.707 [2024-10-07 07:48:41.670520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.967 [2024-10-07 07:48:41.675767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.967 [2024-10-07 07:48:41.675835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.967 [2024-10-07 07:48:41.675852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.967 [2024-10-07 07:48:41.681210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.967 [2024-10-07 07:48:41.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.967 [2024-10-07 07:48:41.681469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.967 [2024-10-07 07:48:41.686805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.967 [2024-10-07 07:48:41.686967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.967 [2024-10-07 07:48:41.686985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.967 [2024-10-07 07:48:41.692074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.967 [2024-10-07 07:48:41.692195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.692213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.696482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.696604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.696623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.700429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.700525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.700543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.704412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.704492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.704510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.708285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.708390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.712203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.712352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.712370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.716046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.716300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.716319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.719850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.720027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.720045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07 07:48:41.723632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:37.968 [2024-10-07 07:48:41.723768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.968 [2024-10-07 07:48:41.723787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:37.968 [2024-10-07
07:48:41.727439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.727593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.731160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.731252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.731270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.734909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.735038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.735065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.738763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.738917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.738936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.742986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.743160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.743179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.747994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.748194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.748212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.753228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.753405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.753423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.758191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.758318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.763525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.763631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.763649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.768590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.768667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.768685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.773946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.774043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.774068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.779322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.779471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.779493] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.784552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.784723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.784742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.788902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.789141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.789159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.792862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.968 [2024-10-07 07:48:41.793039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.968 [2024-10-07 07:48:41.793057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.968 [2024-10-07 07:48:41.796857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.797025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.797044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.800800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.800937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.800955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.805362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.805479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.805497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.810807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.810910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.810928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.815312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.815479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.815498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.819595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.819772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.819791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.824281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.824509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.824527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.828573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.828713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.828732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.833100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.833191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.833209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.837845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.838017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.838035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.842338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.842484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.842503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.846726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.846840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.846859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.851153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 
00:29:37.969 [2024-10-07 07:48:41.851341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.851359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.855554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.855735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.855753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.859966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.860185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.860204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.864260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.864422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.864441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.868592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.868703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.868721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.873464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.873613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.873631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.877701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.877812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.877830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.882467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.882567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.882585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 
07:48:41.886965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.887195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.887214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.891232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.891461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.891480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.895516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.895753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.895775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.900090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.900223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.900242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.904025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.904100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.904118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.907808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.907921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.907939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-10-07 07:48:41.911797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.969 [2024-10-07 07:48:41.911925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-10-07 07:48:41.911943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.915785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.915928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 07:48:41.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.919788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.919992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 07:48:41.920013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.923590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.923807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 07:48:41.923825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.927428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.927631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 07:48:41.927650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.931479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.931633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 
07:48:41.931652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.970 [2024-10-07 07:48:41.935805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:37.970 [2024-10-07 07:48:41.935908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-10-07 07:48:41.935927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.230 [2024-10-07 07:48:41.939827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:38.230 [2024-10-07 07:48:41.940002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.230 [2024-10-07 07:48:41.940021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.230 [2024-10-07 07:48:41.943637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:38.230 [2024-10-07 07:48:41.943741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.230 [2024-10-07 07:48:41.943759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.230 [2024-10-07 07:48:41.947389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90 00:29:38.230 [2024-10-07 07:48:41.947485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.230 [2024-10-07 07:48:41.947503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.230 [2024-10-07 07:48:41.951637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.230 [2024-10-07 07:48:41.951901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.230 [2024-10-07 07:48:41.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.230 [2024-10-07 07:48:41.955840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.230 [2024-10-07 07:48:41.956048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.230 [2024-10-07 07:48:41.956073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.230 [2024-10-07 07:48:41.959713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.230 [2024-10-07 07:48:41.959916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.230 [2024-10-07 07:48:41.959934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.230 [2024-10-07 07:48:41.963784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.230 [2024-10-07 07:48:41.963962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.230 [2024-10-07 07:48:41.963981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.230 [2024-10-07 07:48:41.967641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.230 [2024-10-07 07:48:41.967792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.967812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.971689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.971945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.971964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.975936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.976048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.976074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.980590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.980706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.984541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.984752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.984771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.988412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.988621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.988639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.992388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.992640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.992659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:41.996641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:41.996811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:41.996830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.000465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.000556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.000579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.004255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.004437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.004455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.007996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.008152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.008171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.012195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.012346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.012365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.018498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.018795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.018813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.024838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.025068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.025087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.032019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.032284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.032313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.039216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.039415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.039435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.047574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.047958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.047977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.055438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.055693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.055711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.063153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.063296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.063315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.070609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.070729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.070747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.078941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.079336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.079354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.086506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.231 [2024-10-07 07:48:42.086647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.231 [2024-10-07 07:48:42.086665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.231 [2024-10-07 07:48:42.093886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.094197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.094215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.102738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.102880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.102899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.110326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.110433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.110449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.118106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.118448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.118470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.126175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.126525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.126544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.138109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.138222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.138239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.145524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.145808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.145826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.151381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.151575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.151593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.156693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.156891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.156910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.161072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.161258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.161276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.165587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.165684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.165703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.169945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.170124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.175483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.175632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.175651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.179983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.180088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.180106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.183927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.184111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.184130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.187862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.188073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.188091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.191963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.192175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.192194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.232 [2024-10-07 07:48:42.196565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.232 [2024-10-07 07:48:42.196763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.232 [2024-10-07 07:48:42.196782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.493 [2024-10-07 07:48:42.200426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.493 [2024-10-07 07:48:42.200514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.493 [2024-10-07 07:48:42.200531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.493 [2024-10-07 07:48:42.204312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.204486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.204504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.212166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.212413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.212431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.220710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.220824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.220842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.227194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.227437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.227455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.231587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.231781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.231800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.236012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.236248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.236266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.239968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.240138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.240158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.244277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.244354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.244372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.248165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.248360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.252601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.252749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.252766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.258544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.258734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.258755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.264399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.264794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.264813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.271938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.272171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.272190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.278141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.278392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.278411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.282954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.283126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.283144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.286928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.287038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.287055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.290869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.291040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.291063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.294760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.294901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.294919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.298670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.298760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.298778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.302595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.302814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.302832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.306690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.306866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.494 [2024-10-07 07:48:42.306883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.494 [2024-10-07 07:48:42.311276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.494 [2024-10-07 07:48:42.311487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.311511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.315615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.315737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.315754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.320086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.320167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.320184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.324416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.324579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.324596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.328761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.328914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.328931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.333144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.333213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.333231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.337670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.337867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.337884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.342449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.342612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.342630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.346754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.346941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.346960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.351386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.351479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.351497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.355638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.355717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.355734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.360198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.360322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.360340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.364388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.364502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.364521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.368605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.368698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.368715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.373165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.373385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.373413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:38.495 [2024-10-07 07:48:42.378160]
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x179a4b0) with pdu=0x2000190fef90
00:29:38.495 [2024-10-07 07:48:42.378223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.495 [2024-10-07 07:48:42.378246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:38.495
00:29:38.495 Latency(us)
00:29:38.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.495 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:38.495 nvme0n1 : 2.00 6424.01 803.00 0.00 0.00 2485.93 1607.19 13356.86
00:29:38.495 ===================================================================================================================
00:29:38.495 Total : 6424.01 803.00 0.00 0.00 2485.93 1607.19 13356.86
00:29:38.495 0
00:29:38.495 07:48:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:38.495 07:48:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:38.495 07:48:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:38.495 | .driver_specific
00:29:38.495 | .nvme_error
00:29:38.495 | .status_code
00:29:38.495 | .command_transient_transport_error'
00:29:38.495 07:48:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:38.755 07:48:42 -- host/digest.sh@71 -- # (( 415 > 0 ))
00:29:38.755 07:48:42 -- host/digest.sh@73 -- # killprocess 95854
00:29:38.755 07:48:42 -- common/autotest_common.sh@926 -- # '[' -z 95854 ']'
00:29:38.755 07:48:42 -- common/autotest_common.sh@930 -- # kill -0 95854
00:29:38.755 07:48:42 -- common/autotest_common.sh@931 -- # uname
00:29:38.755 07:48:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:38.755 07:48:42 -- common/autotest_common.sh@932 -- # ps
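The `get_transient_errcount` check above pipes `bdev_get_iostat` output through jq to pull `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`. A minimal Python sketch of the same extraction — the sample JSON below is illustrative, trimmed to just the fields this jq path touches, not a full `bdev_get_iostat` response:

```python
import json

# Illustrative subset of `rpc.py bdev_get_iostat -b nvme0n1` output;
# the real response carries many more counters.
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 415
          }
        }
      }
    }
  ]
}
"""

def get_transient_errcount(raw: str) -> int:
    """Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    | .status_code | .command_transient_transport_error'"""
    stat = json.loads(raw)
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

# digest.sh then asserts the count is non-zero: (( 415 > 0 ))
print(get_transient_errcount(iostat_json))
```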
--no-headers -o comm= 95854 00:29:38.755 07:48:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:38.755 07:48:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:38.755 07:48:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95854' 00:29:38.755 killing process with pid 95854 00:29:38.755 07:48:42 -- common/autotest_common.sh@945 -- # kill 95854 00:29:38.755 Received shutdown signal, test time was about 2.000000 seconds 00:29:38.755 00:29:38.755 Latency(us) 00:29:38.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.755 =================================================================================================================== 00:29:38.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:38.755 07:48:42 -- common/autotest_common.sh@950 -- # wait 95854 00:29:39.015 07:48:42 -- host/digest.sh@115 -- # killprocess 93877 00:29:39.015 07:48:42 -- common/autotest_common.sh@926 -- # '[' -z 93877 ']' 00:29:39.015 07:48:42 -- common/autotest_common.sh@930 -- # kill -0 93877 00:29:39.015 07:48:42 -- common/autotest_common.sh@931 -- # uname 00:29:39.015 07:48:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:39.015 07:48:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93877 00:29:39.015 07:48:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:39.015 07:48:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:39.015 07:48:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93877' 00:29:39.015 killing process with pid 93877 00:29:39.015 07:48:42 -- common/autotest_common.sh@945 -- # kill 93877 00:29:39.015 07:48:42 -- common/autotest_common.sh@950 -- # wait 93877 00:29:39.274 00:29:39.274 real 0m16.502s 00:29:39.274 user 0m31.938s 00:29:39.274 sys 0m4.651s 00:29:39.274 07:48:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.274 07:48:43 -- common/autotest_common.sh@10 -- 
# set +x 00:29:39.274 ************************************ 00:29:39.274 END TEST nvmf_digest_error 00:29:39.274 ************************************ 00:29:39.274 07:48:43 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:39.274 07:48:43 -- host/digest.sh@139 -- # nvmftestfini 00:29:39.274 07:48:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:39.274 07:48:43 -- nvmf/common.sh@116 -- # sync 00:29:39.274 07:48:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:39.274 07:48:43 -- nvmf/common.sh@119 -- # set +e 00:29:39.274 07:48:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:39.274 07:48:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:39.274 rmmod nvme_tcp 00:29:39.274 rmmod nvme_fabrics 00:29:39.274 rmmod nvme_keyring 00:29:39.274 07:48:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:39.274 07:48:43 -- nvmf/common.sh@123 -- # set -e 00:29:39.274 07:48:43 -- nvmf/common.sh@124 -- # return 0 00:29:39.274 07:48:43 -- nvmf/common.sh@477 -- # '[' -n 93877 ']' 00:29:39.274 07:48:43 -- nvmf/common.sh@478 -- # killprocess 93877 00:29:39.274 07:48:43 -- common/autotest_common.sh@926 -- # '[' -z 93877 ']' 00:29:39.274 07:48:43 -- common/autotest_common.sh@930 -- # kill -0 93877 00:29:39.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (93877) - No such process 00:29:39.275 07:48:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 93877 is not found' 00:29:39.275 Process with pid 93877 is not found 00:29:39.275 07:48:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:39.275 07:48:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:39.275 07:48:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:39.275 07:48:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.275 07:48:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:39.275 07:48:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.275 07:48:43 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.275 07:48:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.813 07:48:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:41.813 00:29:41.813 real 0m41.351s 00:29:41.813 user 1m6.126s 00:29:41.813 sys 0m13.492s 00:29:41.813 07:48:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.813 07:48:45 -- common/autotest_common.sh@10 -- # set +x 00:29:41.813 ************************************ 00:29:41.813 END TEST nvmf_digest 00:29:41.813 ************************************ 00:29:41.813 07:48:45 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:41.813 07:48:45 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:41.813 07:48:45 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:41.813 07:48:45 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:41.813 07:48:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:41.813 07:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:41.813 07:48:45 -- common/autotest_common.sh@10 -- # set +x 00:29:41.813 ************************************ 00:29:41.813 START TEST nvmf_bdevperf 00:29:41.813 ************************************ 00:29:41.813 07:48:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:41.813 * Looking for test storage... 
00:29:41.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.813 07:48:45 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.813 07:48:45 -- nvmf/common.sh@7 -- # uname -s 00:29:41.813 07:48:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.813 07:48:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.813 07:48:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.813 07:48:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.813 07:48:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.813 07:48:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.813 07:48:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.813 07:48:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.813 07:48:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.813 07:48:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.813 07:48:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.813 07:48:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.813 07:48:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.813 07:48:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.813 07:48:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.813 07:48:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.813 07:48:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.813 07:48:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.813 07:48:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.813 07:48:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.814 07:48:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.814 07:48:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.814 07:48:45 -- paths/export.sh@5 -- # export PATH 00:29:41.814 07:48:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.814 07:48:45 -- nvmf/common.sh@46 -- # : 0 00:29:41.814 07:48:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:41.814 07:48:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:41.814 07:48:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:41.814 07:48:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.814 07:48:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.814 07:48:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:41.814 07:48:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:41.814 07:48:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:41.814 07:48:45 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.814 07:48:45 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.814 07:48:45 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:41.814 07:48:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:41.814 07:48:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.814 07:48:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:41.814 07:48:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:41.814 07:48:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:41.814 07:48:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.814 07:48:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.814 07:48:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.814 07:48:45 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:41.814 07:48:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:41.814 07:48:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:41.814 07:48:45 -- common/autotest_common.sh@10 -- # set +x 00:29:47.092 07:48:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:47.092 07:48:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:47.092 07:48:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:47.092 07:48:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:47.092 07:48:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:47.092 07:48:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:47.092 07:48:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:47.092 07:48:50 -- nvmf/common.sh@294 -- # net_devs=() 00:29:47.092 07:48:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:47.092 07:48:50 -- nvmf/common.sh@295 -- # e810=() 00:29:47.092 07:48:50 -- nvmf/common.sh@295 -- # local -ga e810 00:29:47.092 07:48:50 -- nvmf/common.sh@296 -- # x722=() 00:29:47.092 07:48:50 -- nvmf/common.sh@296 -- # local -ga x722 00:29:47.092 07:48:50 -- nvmf/common.sh@297 -- # mlx=() 00:29:47.092 07:48:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:47.092 07:48:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.092 07:48:50 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.092 07:48:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:47.092 07:48:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:47.092 07:48:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.092 07:48:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.092 07:48:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.092 07:48:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.092 07:48:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:47.092 07:48:50 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.092 07:48:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.092 07:48:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.092 07:48:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.092 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.092 07:48:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.092 07:48:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.092 07:48:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.092 07:48:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.092 07:48:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.092 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.092 07:48:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.092 07:48:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:47.092 07:48:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:47.092 07:48:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:47.092 07:48:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.092 07:48:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.092 07:48:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.092 07:48:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:47.092 07:48:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.092 07:48:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.092 07:48:50 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:47.092 07:48:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.092 07:48:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.092 07:48:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:47.092 07:48:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:47.092 07:48:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.092 07:48:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.092 07:48:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.092 07:48:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.092 07:48:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:47.092 07:48:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.092 07:48:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.093 07:48:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.093 07:48:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:47.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:29:47.093 00:29:47.093 --- 10.0.0.2 ping statistics --- 00:29:47.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.093 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:29:47.093 07:48:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:29:47.093 00:29:47.093 --- 10.0.0.1 ping statistics --- 00:29:47.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.093 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:29:47.093 07:48:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.093 07:48:50 -- nvmf/common.sh@410 -- # return 0 00:29:47.093 07:48:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:47.093 07:48:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.093 07:48:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:47.093 07:48:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:47.093 07:48:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.093 07:48:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:47.093 07:48:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:47.093 07:48:50 -- host/bdevperf.sh@25 -- # tgt_init 00:29:47.093 07:48:50 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:47.093 07:48:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:47.093 07:48:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:47.093 07:48:50 -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 07:48:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:47.093 07:48:50 -- nvmf/common.sh@469 -- # nvmfpid=99947 00:29:47.093 07:48:50 -- nvmf/common.sh@470 -- # waitforlisten 99947 00:29:47.093 07:48:50 -- common/autotest_common.sh@819 -- # '[' -z 99947 ']' 00:29:47.093 07:48:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.093 07:48:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:47.093 07:48:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.093 07:48:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:47.093 07:48:50 -- common/autotest_common.sh@10 -- # set +x 00:29:47.093 [2024-10-07 07:48:50.947161] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:47.093 [2024-10-07 07:48:50.947206] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.093 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.093 [2024-10-07 07:48:51.005978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.353 [2024-10-07 07:48:51.088166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:47.353 [2024-10-07 07:48:51.088290] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.353 [2024-10-07 07:48:51.088299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.353 [2024-10-07 07:48:51.088305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.353 [2024-10-07 07:48:51.088422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.353 [2024-10-07 07:48:51.088507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.353 [2024-10-07 07:48:51.088509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.921 07:48:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:47.921 07:48:51 -- common/autotest_common.sh@852 -- # return 0 00:29:47.921 07:48:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:47.921 07:48:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:47.921 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.921 07:48:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.921 07:48:51 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.922 07:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.922 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.922 [2024-10-07 07:48:51.812679] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.922 07:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.922 07:48:51 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:47.922 07:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.922 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.922 Malloc0 00:29:47.922 07:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.922 07:48:51 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.922 07:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.922 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.922 07:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.922 07:48:51 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.922 07:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.922 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.922 07:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.922 07:48:51 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.922 07:48:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.922 07:48:51 -- common/autotest_common.sh@10 -- # set +x 00:29:47.922 [2024-10-07 07:48:51.878044] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.922 07:48:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.922 07:48:51 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:47.922 07:48:51 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:47.922 07:48:51 -- nvmf/common.sh@520 -- # config=() 00:29:47.922 07:48:51 -- nvmf/common.sh@520 -- # local subsystem config 00:29:47.922 07:48:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:47.922 07:48:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:47.922 { 00:29:47.922 "params": { 00:29:47.922 "name": "Nvme$subsystem", 00:29:47.922 "trtype": "$TEST_TRANSPORT", 00:29:47.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:47.922 "adrfam": "ipv4", 00:29:47.922 "trsvcid": "$NVMF_PORT", 00:29:47.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:47.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:47.922 "hdgst": ${hdgst:-false}, 00:29:47.922 "ddgst": ${ddgst:-false} 00:29:47.922 }, 00:29:47.922 "method": "bdev_nvme_attach_controller" 00:29:47.922 } 00:29:47.922 EOF 00:29:47.922 )") 00:29:47.922 07:48:51 -- nvmf/common.sh@542 -- # cat 00:29:48.181 07:48:51 -- nvmf/common.sh@544 -- # jq . 
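The `gen_nvmf_target_json` heredoc above expands one `Nvme$subsystem` entry per subsystem, with shell defaults like `${hdgst:-false}` turning the digest flags off unless set. A rough Python equivalent of that expansion — the field values mirror the expanded JSON printed in this log, not the actual `nvmf/common.sh` source, and the single-object return shape is an assumption for the one-subsystem case:

```python
import json

def gen_nvmf_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                         trsvcid="4420", hdgst=False, ddgst=False):
    """Build one bdev_nvme_attach_controller entry the way the heredoc
    in this log does; hdgst/ddgst default to False like ${hdgst:-false}."""
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=1)

print(gen_nvmf_target_json())
```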
00:29:48.181 07:48:51 -- nvmf/common.sh@545 -- # IFS=,
00:29:48.181 07:48:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:29:48.181 "params": {
00:29:48.181 "name": "Nvme1",
00:29:48.181 "trtype": "tcp",
00:29:48.181 "traddr": "10.0.0.2",
00:29:48.181 "adrfam": "ipv4",
00:29:48.181 "trsvcid": "4420",
00:29:48.181 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:48.181 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:48.181 "hdgst": false,
00:29:48.181 "ddgst": false
00:29:48.181 },
00:29:48.181 "method": "bdev_nvme_attach_controller"
00:29:48.181 }'
00:29:48.181 [2024-10-07 07:48:51.923662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:29:48.181 [2024-10-07 07:48:51.923704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100187 ]
00:29:48.181 EAL: No free 2048 kB hugepages reported on node 1
00:29:48.181 [2024-10-07 07:48:51.978914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:48.181 [2024-10-07 07:48:52.054514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:48.440 Running I/O for 1 seconds...
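The JSON blob that bdevperf receives on /dev/fd/62 is produced by the gen_nvmf_target_json idiom visible in the xtrace above: one attach_controller heredoc per subsystem, collected into an array and joined with IFS=','. A self-contained sketch with the log's test values hard-coded in place of the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP variables:

```shell
# Rough reconstruction of the gen_nvmf_target_json helper traced in the
# log; 10.0.0.2:4420 are the test defaults there, not live endpoints.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"  # comma-joins when more than one subsystem
}
gen_nvmf_target_json 1
```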
00:29:49.378
00:29:49.378 Latency(us)
00:29:49.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:49.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:49.378 Verification LBA range: start 0x0 length 0x4000
00:29:49.378 Nvme1n1 : 1.00 17425.83 68.07 0.00 0.00 7318.02 862.11 15166.90
00:29:49.378 ===================================================================================================================
00:29:49.378 Total : 17425.83 68.07 0.00 0.00 7318.02 862.11 15166.90
00:29:49.638 07:48:53 -- host/bdevperf.sh@30 -- # bdevperfpid=100426
00:29:49.638 07:48:53 -- host/bdevperf.sh@32 -- # sleep 3
00:29:49.638 07:48:53 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:49.638 07:48:53 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:49.638 07:48:53 -- nvmf/common.sh@520 -- # config=()
00:29:49.638 07:48:53 -- nvmf/common.sh@520 -- # local subsystem config
00:29:49.638 07:48:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:29:49.638 07:48:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:29:49.638 {
00:29:49.638 "params": {
00:29:49.638 "name": "Nvme$subsystem",
00:29:49.638 "trtype": "$TEST_TRANSPORT",
00:29:49.638 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:49.638 "adrfam": "ipv4",
00:29:49.638 "trsvcid": "$NVMF_PORT",
00:29:49.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:49.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:49.638 "hdgst": ${hdgst:-false},
00:29:49.638 "ddgst": ${ddgst:-false}
00:29:49.638 },
00:29:49.638 "method": "bdev_nvme_attach_controller"
00:29:49.638 }
00:29:49.638 EOF
00:29:49.638 )")
00:29:49.638 07:48:53 -- nvmf/common.sh@542 -- # cat
00:29:49.638 07:48:53 -- nvmf/common.sh@544 -- # jq .
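As a sanity check on the one-second run above, the MiB/s column follows directly from the IOPS column at the 4096-byte I/O size used (-o 4096):

```shell
# 17425.83 IOPS x 4096 B per I/O, converted to MiB/s; prints 68.07,
# matching the bdevperf result table.
awk 'BEGIN { printf "%.2f\n", 17425.83 * 4096 / (1024 * 1024) }'
```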
00:29:49.638 07:48:53 -- nvmf/common.sh@545 -- # IFS=,
00:29:49.638 07:48:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:29:49.638 "params": {
00:29:49.638 "name": "Nvme1",
00:29:49.638 "trtype": "tcp",
00:29:49.638 "traddr": "10.0.0.2",
00:29:49.638 "adrfam": "ipv4",
00:29:49.638 "trsvcid": "4420",
00:29:49.638 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:49.638 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:49.638 "hdgst": false,
00:29:49.638 "ddgst": false
00:29:49.638 },
00:29:49.638 "method": "bdev_nvme_attach_controller"
00:29:49.638 }'
00:29:49.638 [2024-10-07 07:48:53.495544] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:29:49.638 [2024-10-07 07:48:53.495592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100426 ]
00:29:49.638 EAL: No free 2048 kB hugepages reported on node 1
00:29:49.638 [2024-10-07 07:48:53.551263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:49.898 [2024-10-07 07:48:53.619500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:49.898 Running I/O for 15 seconds...
00:29:53.197 07:48:56 -- host/bdevperf.sh@33 -- # kill -9 99947
00:29:53.197 07:48:56 -- host/bdevperf.sh@35 -- # sleep 3
00:29:53.197 [2024-10-07 07:48:56.474332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.197 [2024-10-07 07:48:56.474369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.197 [2024-10-07 07:48:56.474387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.197 [2024-10-07 07:48:56.474396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.197 [2024-10-07 07:48:56.474411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.197 [2024-10-07 07:48:56.474418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.197 [... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for the remaining in-flight READ and WRITE commands (lba 107072 through 108232), every completion reporting ABORTED - SQ DELETION (00/08) qid:1 after the kill -9 of the target ...]
lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.199 [2024-10-07 07:48:56.475847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 
[2024-10-07 07:48:56.475929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.199 [2024-10-07 07:48:56.475968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.199 [2024-10-07 07:48:56.475976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.475983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.475991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.475997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 
[2024-10-07 07:48:56.476186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.200 [2024-10-07 07:48:56.476238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.200 [2024-10-07 07:48:56.476382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbba7c0 is same with the state(5) to be set 00:29:53.200 [2024-10-07 07:48:56.476398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.200 [2024-10-07 07:48:56.476404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.200 [2024-10-07 07:48:56.476411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107880 len:8 PRP1 0x0 PRP2 0x0 00:29:53.200 [2024-10-07 07:48:56.476425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.200 [2024-10-07 07:48:56.476467] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbba7c0 was disconnected and freed. reset controller. 
00:29:53.200 [2024-10-07 07:48:56.478513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.200 [2024-10-07 07:48:56.478561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.200 [2024-10-07 07:48:56.479080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.200 [2024-10-07 07:48:56.479353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.200 [2024-10-07 07:48:56.479363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.200 [2024-10-07 07:48:56.479371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.200 [2024-10-07 07:48:56.479487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.200 [2024-10-07 07:48:56.479615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.200 [2024-10-07 07:48:56.479623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.200 [2024-10-07 07:48:56.479631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.200 [2024-10-07 07:48:56.481432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.200 [2024-10-07 07:48:56.490650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.200 [2024-10-07 07:48:56.491121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.200 [2024-10-07 07:48:56.491385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.200 [2024-10-07 07:48:56.491417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.200 [2024-10-07 07:48:56.491442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.200 [2024-10-07 07:48:56.491702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.200 [2024-10-07 07:48:56.491831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.200 [2024-10-07 07:48:56.491839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.200 [2024-10-07 07:48:56.491846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.200 [2024-10-07 07:48:56.493537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.200 [2024-10-07 07:48:56.502385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.200 [2024-10-07 07:48:56.502867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.200 [2024-10-07 07:48:56.503092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.503104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.503112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.503210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.503324] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.503333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.503339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.505040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.513968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.514389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.514597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.514607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.514615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.514726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.514851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.514859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.514865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.516599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.525734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.526102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.526390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.526400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.526407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.526525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.526643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.526651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.526657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.528296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.537561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.537849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.538066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.538076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.538099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.538240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.538322] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.538333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.538339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.540129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.549308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.549762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.550021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.550052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.550091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.550424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.550698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.550706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.550713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.552456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.561287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.561691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.561874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.561883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.561890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.561995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.562136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.562145] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.562151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.563924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.573047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.573408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.573661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.573672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.573679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.573838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.573995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.574003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.574014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.575868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.584955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.585323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.585409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.201 [2024-10-07 07:48:56.585418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.201 [2024-10-07 07:48:56.585425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.201 [2024-10-07 07:48:56.585553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.201 [2024-10-07 07:48:56.585652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.201 [2024-10-07 07:48:56.585660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.201 [2024-10-07 07:48:56.585666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.201 [2024-10-07 07:48:56.587439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.201 [2024-10-07 07:48:56.596787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.201 [2024-10-07 07:48:56.597189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.597463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.597473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.597480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.597577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.597687] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.597694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.597701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.599479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.608527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.608928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.609203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.609237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.609261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.609593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.609966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.609974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.609980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.611599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.620427] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.620811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.621085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.621096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.621103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.621245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.621398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.621406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.621412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.623068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.632254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.632679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.632889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.632899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.632906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.632988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.633105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.633113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.633119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.634773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.643954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.644359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.644624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.644634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.644641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.644737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.644833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.644840] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.644846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.646495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.655760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.656161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.656388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.656398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.656405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.656558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.656697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.656705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.656711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.658457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.667487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.667877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.668152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.668163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.668170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.668238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.668334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.668341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.668348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.669986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.679242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.679655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.679927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.679937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.679944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.680054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.680173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.680181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.680188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.681934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.691037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.691478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.691755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.691787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.691811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.692152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.202 [2024-10-07 07:48:56.692264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.202 [2024-10-07 07:48:56.692272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.202 [2024-10-07 07:48:56.692278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.202 [2024-10-07 07:48:56.693951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.202 [2024-10-07 07:48:56.703005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.202 [2024-10-07 07:48:56.703397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.703673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.202 [2024-10-07 07:48:56.703683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.202 [2024-10-07 07:48:56.703690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.202 [2024-10-07 07:48:56.703801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.703911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.703919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.703926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.705639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.714876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.715276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.715542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.715552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.715559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.715656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.715751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.715758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.715764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.717535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.726597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.727080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.727449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.727490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.727514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.727650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.727817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.727831] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.727842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.730692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.739106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.739540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.739738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.739749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.739757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.739867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.739976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.739985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.739993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.742000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.751249] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.751697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.751959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.751991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.752015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.752336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.752438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.752447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.752454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.754195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.763306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.763749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.764083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.764117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.764150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.764438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.764538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.764546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.764553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.766348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.775174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.775618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.775942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.775974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.775997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.776341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.776602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.776610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.776616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.778345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.787072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.787512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.787717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.787728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.787735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.787832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.787965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.787974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.787981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.789574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.798864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.799274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.799535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.799566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.799589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.799835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.800245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.800272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.800293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.802068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.810653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.203 [2024-10-07 07:48:56.811073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.811349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.203 [2024-10-07 07:48:56.811358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.203 [2024-10-07 07:48:56.811383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.203 [2024-10-07 07:48:56.811867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.203 [2024-10-07 07:48:56.812362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.203 [2024-10-07 07:48:56.812402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.203 [2024-10-07 07:48:56.812423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.203 [2024-10-07 07:48:56.814273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.203 [2024-10-07 07:48:56.822584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.204 [2024-10-07 07:48:56.822936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.204 [2024-10-07 07:48:56.823139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.204 [2024-10-07 07:48:56.823150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.204 [2024-10-07 07:48:56.823186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.204 [2024-10-07 07:48:56.823519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.204 [2024-10-07 07:48:56.823788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.204 [2024-10-07 07:48:56.823796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.204 [2024-10-07 07:48:56.823803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.204 [2024-10-07 07:48:56.825519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.204 [2024-10-07 07:48:56.834530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.204 [2024-10-07 07:48:56.834905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.204 [2024-10-07 07:48:56.835185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.204 [2024-10-07 07:48:56.835195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.204 [2024-10-07 07:48:56.835202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.204 [2024-10-07 07:48:56.835348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.204 [2024-10-07 07:48:56.835456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.204 [2024-10-07 07:48:56.835464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.204 [2024-10-07 07:48:56.835470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.204 [2024-10-07 07:48:56.837025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.204 [2024-10-07 07:48:56.846357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.846780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.847141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.847175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.847199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.847530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.847869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.847877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.847884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.849459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.858186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.858563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.858838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.858847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.858854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.858972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.859112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.859121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.859127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.860952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.870015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.870435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.870725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.870757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.870781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.871016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.871117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.871129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.871135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.872929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.881810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.882248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.882528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.882559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.882582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.882917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.883063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.883071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.883078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.884767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.893505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.893887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.894156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.894167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.894174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.894271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.894367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.894374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.894380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.896167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.905423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.905808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.906069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.906080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.906088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.906184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.906323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.906331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.906341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.908117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.917252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.917686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.917913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.917945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.917968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.918366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.204 [2024-10-07 07:48:56.918801] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.204 [2024-10-07 07:48:56.918826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.204 [2024-10-07 07:48:56.918847] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.204 [2024-10-07 07:48:56.920664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.204 [2024-10-07 07:48:56.928988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.204 [2024-10-07 07:48:56.929412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.929698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.204 [2024-10-07 07:48:56.929730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.204 [2024-10-07 07:48:56.929753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.204 [2024-10-07 07:48:56.930098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.930533] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.930558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.930579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.932548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.940702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:56.941161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.941487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.941519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:56.941542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:56.941873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.942184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.942193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.942199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.943895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.952506] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:56.952895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.953115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.953126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:56.953133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:56.953244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.953312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.953319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.953325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.955033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.964289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:56.964659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.964938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.964948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:56.964974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:56.965372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.965706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.965731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.965752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.967679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.976018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:56.976381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.976660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.976691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:56.976713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:56.977115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.977260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.977268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.977275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.979046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.987853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:56.988264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.988481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:56.988492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:56.988499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:56.988613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:56.988697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:56.988704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:56.988711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:56.990512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:56.999702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:57.000144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.000482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.000514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:57.000537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:57.000917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:57.001206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:57.001220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:57.001230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:57.004112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:57.012314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:57.012720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.013045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.013092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:57.013116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:57.013386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:57.013523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:57.013531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:57.013538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:57.015461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:57.024131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:57.024518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.024797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.024806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:57.024813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:57.024877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:57.025009] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:57.025016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:57.025021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:57.026761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:57.036046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.205 [2024-10-07 07:48:57.036430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.036687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.205 [2024-10-07 07:48:57.036697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.205 [2024-10-07 07:48:57.036703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.205 [2024-10-07 07:48:57.036794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.205 [2024-10-07 07:48:57.036885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.205 [2024-10-07 07:48:57.036892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.205 [2024-10-07 07:48:57.036898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.205 [2024-10-07 07:48:57.038615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.205 [2024-10-07 07:48:57.047871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.048276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.048527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.048537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.048544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.048684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.048766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.048774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.048780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.050540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.059815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.060201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.060421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.060433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.060440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.060531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.060636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.060642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.060648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.062449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.071701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.072088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.072294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.072325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.072348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.072729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.073021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.073029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.073035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.074649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.083454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.083786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.084040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.084051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.084063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.084189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.084270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.084277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.084283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.085927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.095166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.095551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.095814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.095824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.095834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.095925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.096043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.096050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.096056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.097792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.106851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.107239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.107514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.107524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.107532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.107671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.107767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.107775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.107781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.109537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.118710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.119033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.119320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.119353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.119376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.119658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.120155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.120181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.120203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.121877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.130539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.130961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.131296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.131331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.206 [2024-10-07 07:48:57.131354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.206 [2024-10-07 07:48:57.131598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.206 [2024-10-07 07:48:57.131724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.206 [2024-10-07 07:48:57.131732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.206 [2024-10-07 07:48:57.131739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.206 [2024-10-07 07:48:57.133610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.206 [2024-10-07 07:48:57.142341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.206 [2024-10-07 07:48:57.142752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.206 [2024-10-07 07:48:57.142956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.207 [2024-10-07 07:48:57.142966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.207 [2024-10-07 07:48:57.142973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.207 [2024-10-07 07:48:57.143099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.207 [2024-10-07 07:48:57.143197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.207 [2024-10-07 07:48:57.143204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.207 [2024-10-07 07:48:57.143210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.207 [2024-10-07 07:48:57.144954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.207 [2024-10-07 07:48:57.154221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.207 [2024-10-07 07:48:57.154648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.207 [2024-10-07 07:48:57.154849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.207 [2024-10-07 07:48:57.154860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.207 [2024-10-07 07:48:57.154867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.207 [2024-10-07 07:48:57.154994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.207 [2024-10-07 07:48:57.155097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.207 [2024-10-07 07:48:57.155106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.207 [2024-10-07 07:48:57.155114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.207 [2024-10-07 07:48:57.156983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.468 [2024-10-07 07:48:57.166244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.166544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.166740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.166750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.166758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.166882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.166950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.166960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.166966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.168572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.178177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.178445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.178582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.178593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.178600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.178729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.178798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.178806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.178812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.180570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.190003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.190333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.190531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.190542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.190550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.190708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.190840] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.190848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.190854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.192678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.202117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.202535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.202790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.202821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.202844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.203188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.203421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.203430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.203440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.205162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.213870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.214259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.214515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.214525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.214533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.214672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.214797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.214805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.214811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.216548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.225725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.226084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.226372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.226404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.226427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.226647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.226772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.226780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.226787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.228618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.237711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.238112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.238264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.238275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.238282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.238381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.468 [2024-10-07 07:48:57.238466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.468 [2024-10-07 07:48:57.238475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.468 [2024-10-07 07:48:57.238483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.468 [2024-10-07 07:48:57.240260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.468 [2024-10-07 07:48:57.249637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.468 [2024-10-07 07:48:57.249967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.250175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.468 [2024-10-07 07:48:57.250188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.468 [2024-10-07 07:48:57.250196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.468 [2024-10-07 07:48:57.250308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.250405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.250412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.250418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.252212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.261749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.262090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.262268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.262278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.262285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.262396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.262506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.262514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.262521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.264398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.273621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.274091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.274338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.274348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.274355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.274474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.274579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.274586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.274592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.276259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.285395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.285749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.285949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.285980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.286003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.286352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.286735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.286760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.286781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.288631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.297207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.297630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.297923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.297955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.297979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.298372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.298806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.298831] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.298852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.300565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.309145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.309485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.309690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.309700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.309707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.309833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.309914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.309921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.309927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.311574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.320789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.321156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.321417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.321449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.321473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.321753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.321981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.321989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.321995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.323714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.332777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.333229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.333387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.333397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.333405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.333558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.333669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.333677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.333683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.335325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.344762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.345130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.345409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.345420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.345427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.345523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.345648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.345656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.345662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.347289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.469 [2024-10-07 07:48:57.356566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.469 [2024-10-07 07:48:57.357014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.357343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.469 [2024-10-07 07:48:57.357392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.469 [2024-10-07 07:48:57.357416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.469 [2024-10-07 07:48:57.357649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.469 [2024-10-07 07:48:57.357804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.469 [2024-10-07 07:48:57.357812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.469 [2024-10-07 07:48:57.357818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.469 [2024-10-07 07:48:57.359611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.470 [2024-10-07 07:48:57.368363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.470 [2024-10-07 07:48:57.368698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.470 [2024-10-07 07:48:57.368966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.470 [2024-10-07 07:48:57.368976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.470 [2024-10-07 07:48:57.368983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.470 [2024-10-07 07:48:57.369098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.470 [2024-10-07 07:48:57.369195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.470 [2024-10-07 07:48:57.369202] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.470 [2024-10-07 07:48:57.369209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.470 [2024-10-07 07:48:57.370985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.470 [2024-10-07 07:48:57.380057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.470 [2024-10-07 07:48:57.380519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.470 [2024-10-07 07:48:57.380679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.470 [2024-10-07 07:48:57.380689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.470 [2024-10-07 07:48:57.380697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.470 [2024-10-07 07:48:57.380822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.470 [2024-10-07 07:48:57.380946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.470 [2024-10-07 07:48:57.380954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.470 [2024-10-07 07:48:57.380961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.470 [2024-10-07 07:48:57.382684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.470 [2024-10-07 07:48:57.391860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.470 [2024-10-07 07:48:57.392210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.392465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.392497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.470 [2024-10-07 07:48:57.392528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.470 [2024-10-07 07:48:57.392824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.470 [2024-10-07 07:48:57.392920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.470 [2024-10-07 07:48:57.392928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.470 [2024-10-07 07:48:57.392934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.470 [2024-10-07 07:48:57.394483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.470 [2024-10-07 07:48:57.403690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.470 [2024-10-07 07:48:57.404094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.404284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.404315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.470 [2024-10-07 07:48:57.404339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.470 [2024-10-07 07:48:57.404770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.470 [2024-10-07 07:48:57.405141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.470 [2024-10-07 07:48:57.405149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.470 [2024-10-07 07:48:57.405156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.470 [2024-10-07 07:48:57.407499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.470 [2024-10-07 07:48:57.416498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.470 [2024-10-07 07:48:57.416913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.417133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.417168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.470 [2024-10-07 07:48:57.417191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.470 [2024-10-07 07:48:57.417672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.470 [2024-10-07 07:48:57.418055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.470 [2024-10-07 07:48:57.418091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.470 [2024-10-07 07:48:57.418113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.470 [2024-10-07 07:48:57.420051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.470 [2024-10-07 07:48:57.428426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.470 [2024-10-07 07:48:57.428744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.429081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.470 [2024-10-07 07:48:57.429115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.470 [2024-10-07 07:48:57.429139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.470 [2024-10-07 07:48:57.429355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.470 [2024-10-07 07:48:57.429438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.470 [2024-10-07 07:48:57.429446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.470 [2024-10-07 07:48:57.429452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.470 [2024-10-07 07:48:57.431139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.731 [2024-10-07 07:48:57.440219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.731 [2024-10-07 07:48:57.440628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.440957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.440989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.731 [2024-10-07 07:48:57.441012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.731 [2024-10-07 07:48:57.441459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.731 [2024-10-07 07:48:57.441845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.731 [2024-10-07 07:48:57.441869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.731 [2024-10-07 07:48:57.441891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.731 [2024-10-07 07:48:57.443936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.731 [2024-10-07 07:48:57.452043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.731 [2024-10-07 07:48:57.452338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.452554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.452587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.731 [2024-10-07 07:48:57.452610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.731 [2024-10-07 07:48:57.452966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.731 [2024-10-07 07:48:57.453048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.731 [2024-10-07 07:48:57.453056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.731 [2024-10-07 07:48:57.453067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.731 [2024-10-07 07:48:57.454705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.731 [2024-10-07 07:48:57.463958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.731 [2024-10-07 07:48:57.464282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.464519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.464551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.731 [2024-10-07 07:48:57.464574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.731 [2024-10-07 07:48:57.464855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.731 [2024-10-07 07:48:57.465034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.731 [2024-10-07 07:48:57.465042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.731 [2024-10-07 07:48:57.465048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.731 [2024-10-07 07:48:57.466615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.731 [2024-10-07 07:48:57.475731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.731 [2024-10-07 07:48:57.476103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.476314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.476323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.731 [2024-10-07 07:48:57.476331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.731 [2024-10-07 07:48:57.476441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.731 [2024-10-07 07:48:57.476551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.731 [2024-10-07 07:48:57.476565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.731 [2024-10-07 07:48:57.476571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.731 [2024-10-07 07:48:57.478350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.731 [2024-10-07 07:48:57.487742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.731 [2024-10-07 07:48:57.488169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.488329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.731 [2024-10-07 07:48:57.488340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.731 [2024-10-07 07:48:57.488347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.731 [2024-10-07 07:48:57.488461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.731 [2024-10-07 07:48:57.488576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.731 [2024-10-07 07:48:57.488583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.488590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.490364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.499727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.500177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.500318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.500328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.500336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.500494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.500607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.500618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.500625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.502341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.511818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.512187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.512444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.512454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.512461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.512586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.512711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.512719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.512726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.514417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.523700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.524070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.524279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.524310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.524332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.524613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.525108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.525135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.525157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.527167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.535656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.536020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.536282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.536293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.536300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.536426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.536552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.536560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.536570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.538307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.547564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.547983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.548249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.548282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.548305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.548515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.548612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.548620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.548626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.550420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.559437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.559873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.560131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.560162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.560187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.560568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.560951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.560976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.560998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.562970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.571154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.571443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.571654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.571685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.571708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.572151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.572282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.572290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.572296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.573884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.583015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.583469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.583774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.583805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.583828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.584218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.584553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.584589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.584595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.586168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.594843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.595171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.595446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.732 [2024-10-07 07:48:57.595456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.732 [2024-10-07 07:48:57.595464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.732 [2024-10-07 07:48:57.595559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.732 [2024-10-07 07:48:57.595641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.732 [2024-10-07 07:48:57.595648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.732 [2024-10-07 07:48:57.595654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.732 [2024-10-07 07:48:57.597240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.732 [2024-10-07 07:48:57.606722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.732 [2024-10-07 07:48:57.607068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.607321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.607331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.607339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.607449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.607545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.607552] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.607558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.609172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.618604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.618985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.619267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.619278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.619285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.619388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.619492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.619499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.619505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.621199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.630451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.630882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.631158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.631169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.631177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.631294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.631412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.631421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.631429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.633037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.642391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.642787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.643039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.643049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.643056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.643200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.643310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.643318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.643324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.645039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.654200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.654552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.654820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.654851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.654874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.655317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.655554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.655577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.655599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.657520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.665911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.666351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.666634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.666644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.666651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.666762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.666887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.666894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.666901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.668399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.677674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.678080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.678279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.678289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.678295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.678400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.678464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.678470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.678476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.680162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.733 [2024-10-07 07:48:57.689501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.733 [2024-10-07 07:48:57.689902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.690106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-10-07 07:48:57.690120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.733 [2024-10-07 07:48:57.690127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.733 [2024-10-07 07:48:57.690210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.733 [2024-10-07 07:48:57.690321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.733 [2024-10-07 07:48:57.690328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.733 [2024-10-07 07:48:57.690334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.733 [2024-10-07 07:48:57.691967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.995 [2024-10-07 07:48:57.701604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.995 [2024-10-07 07:48:57.702002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-10-07 07:48:57.702230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-10-07 07:48:57.702241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.995 [2024-10-07 07:48:57.702248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.995 [2024-10-07 07:48:57.702357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.995 [2024-10-07 07:48:57.702453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.995 [2024-10-07 07:48:57.702460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.995 [2024-10-07 07:48:57.702466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.995 [2024-10-07 07:48:57.704080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.995 [2024-10-07 07:48:57.713282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.995 [2024-10-07 07:48:57.713548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.713754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.713764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.995 [2024-10-07 07:48:57.713771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.995 [2024-10-07 07:48:57.713882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.995 [2024-10-07 07:48:57.713964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.995 [2024-10-07 07:48:57.713971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.995 [2024-10-07 07:48:57.713977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.995 [2024-10-07 07:48:57.715620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.995 [2024-10-07 07:48:57.725177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.995 [2024-10-07 07:48:57.725479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.725737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.725747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.995 [2024-10-07 07:48:57.725757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.995 [2024-10-07 07:48:57.725821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.995 [2024-10-07 07:48:57.725952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.995 [2024-10-07 07:48:57.725960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.995 [2024-10-07 07:48:57.725965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.995 [2024-10-07 07:48:57.727618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.995 [2024-10-07 07:48:57.737076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.995 [2024-10-07 07:48:57.737496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.737822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.995 [2024-10-07 07:48:57.737853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.737878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.738024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.738157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.738166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.738172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.739949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.749019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.749358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.749635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.749645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.749652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.749811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.749969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.749977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.749984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.751607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.760761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.761164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.761316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.761326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.761333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.761468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.761572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.761580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.761586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.763291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.772581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.773010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.773347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.773381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.773404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.773784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.774076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.774085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.774091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.775739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.784358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.784823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.785095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.785107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.785114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.785240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.785350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.785358] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.785365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.786996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.796310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.796743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.796912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.796922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.796929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.797025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.797101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.797109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.797115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.799067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.808125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.808452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.808658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.808668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.808675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.808786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.808926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.808934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.808940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.810700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.819889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.820282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.820464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.820495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.820519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.820879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.820976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.820984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.820991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.822638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.831853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.832191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.832401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.832434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.832458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.832889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.833000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.833009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.996 [2024-10-07 07:48:57.833019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.996 [2024-10-07 07:48:57.834727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.996 [2024-10-07 07:48:57.843620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.996 [2024-10-07 07:48:57.843947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.844211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.996 [2024-10-07 07:48:57.844245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.996 [2024-10-07 07:48:57.844269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.996 [2024-10-07 07:48:57.844600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.996 [2024-10-07 07:48:57.844983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.996 [2024-10-07 07:48:57.845008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.845030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.846839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.855466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.855904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.856241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.856273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.856297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.856677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.857016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.857024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.857030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.858607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.867386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.867835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.868110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.868144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.868167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.868390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.868486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.868494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.868501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.870254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.879282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.879686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.879908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.879940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.879963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.880357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.880741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.880765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.880787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.882846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.891082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.891391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.891605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.891636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.891659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.891991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.892184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.892192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.892199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.893783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.902759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.903181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.903381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.903391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.903398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.903481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.903562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.903569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.903576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.905381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.914599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.915078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.915402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.915434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.915457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.915554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.915636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.915643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.915649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.917417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.926306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.926592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.926853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.926884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.926907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.927323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.927420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.927428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.927434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.929101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.938258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.938649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.938860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.938890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.938914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.939308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.939479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.939487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.939493] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.941007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.949961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.997 [2024-10-07 07:48:57.950340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.950669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-10-07 07:48:57.950701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:53.997 [2024-10-07 07:48:57.950725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:53.997 [2024-10-07 07:48:57.951219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:53.997 [2024-10-07 07:48:57.951487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.997 [2024-10-07 07:48:57.951495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.997 [2024-10-07 07:48:57.951501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.997 [2024-10-07 07:48:57.954239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.997 [2024-10-07 07:48:57.962322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.997 [2024-10-07 07:48:57.962751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-10-07 07:48:57.963039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.998 [2024-10-07 07:48:57.963050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:53.998 [2024-10-07 07:48:57.963064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:53.998 [2024-10-07 07:48:57.963169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:53.998 [2024-10-07 07:48:57.963242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.998 [2024-10-07 07:48:57.963250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.998 [2024-10-07 07:48:57.963257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-10-07 07:48:57.965180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-10-07 07:48:57.974252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-10-07 07:48:57.974652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-10-07 07:48:57.974859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-10-07 07:48:57.974869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-10-07 07:48:57.974877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.259 [2024-10-07 07:48:57.975016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.259 [2024-10-07 07:48:57.975149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-10-07 07:48:57.975158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-10-07 07:48:57.975165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-10-07 07:48:57.977002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-10-07 07:48:57.986109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-10-07 07:48:57.986552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-10-07 07:48:57.986826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-10-07 07:48:57.986866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-10-07 07:48:57.986890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.259 [2024-10-07 07:48:57.987236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.259 [2024-10-07 07:48:57.987618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-10-07 07:48:57.987626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-10-07 07:48:57.987632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:57.989462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:57.998035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:57.998446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:57.998656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:57.998689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:57.998712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:57.999044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:57.999319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:57.999328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:57.999334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.001131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.009955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.010415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.010740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.010772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.010795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.010976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.011371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.011398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.011419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.013921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.022565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.022995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.023201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.023213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.023225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.023346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.023435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.023444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.023451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.025360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.034271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.034681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.034929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.034939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.034946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.035024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.035138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.035146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.035152] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.036806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.046071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.046373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.046573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.046582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.046589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.046680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.046784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.046791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.046797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.048452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.057937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.058394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.058677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.058709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.058732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.059184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.059569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.059594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.059615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.061402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.069875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.070225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.070499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.070509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.070516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.070635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.070753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.070761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.070767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.072473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.081757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.082165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.082427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.082459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.082483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.082813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.083361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.083389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.083409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.085403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.093625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.094092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.094343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.094374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.260 [2024-10-07 07:48:58.094398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.260 [2024-10-07 07:48:58.094553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.260 [2024-10-07 07:48:58.094661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.260 [2024-10-07 07:48:58.094669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.260 [2024-10-07 07:48:58.094675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.260 [2024-10-07 07:48:58.096411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.260 [2024-10-07 07:48:58.105374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.260 [2024-10-07 07:48:58.105786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.260 [2024-10-07 07:48:58.105935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.105944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.105951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.106056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.106145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.106152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.106158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.107651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.117136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.117556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.117755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.117765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.117772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.117912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.118007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.118015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.118022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.119664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.128824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.129240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.129492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.129501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.129509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.129626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.129744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.129755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.129761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.131468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.140616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.141004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.141274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.141306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.141329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.141604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.141701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.141708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.141715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.143304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.152379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.152816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.153105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.153139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.153162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.153492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.153673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.153682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.153688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.155408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.164224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.164558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.164833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.164843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.164850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.164990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.165120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.165129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.165139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.166830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.175974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.176399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.176630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.176640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.176647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.176744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.176825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.176832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.176839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.178406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.187900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.188313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.188568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.188578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.188586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.188696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.188821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.188829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.188835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.190552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.199554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.199843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.200118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.200129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.200135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.200246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.200357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.261 [2024-10-07 07:48:58.200364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.261 [2024-10-07 07:48:58.200370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.261 [2024-10-07 07:48:58.202023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.261 [2024-10-07 07:48:58.211225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.261 [2024-10-07 07:48:58.211648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.211954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.261 [2024-10-07 07:48:58.211985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.261 [2024-10-07 07:48:58.212008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.261 [2024-10-07 07:48:58.212225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.261 [2024-10-07 07:48:58.212365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.262 [2024-10-07 07:48:58.212373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.262 [2024-10-07 07:48:58.212379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.262 [2024-10-07 07:48:58.214082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.262 [2024-10-07 07:48:58.223098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.262 [2024-10-07 07:48:58.223495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.262 [2024-10-07 07:48:58.223748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.262 [2024-10-07 07:48:58.223759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.262 [2024-10-07 07:48:58.223766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.262 [2024-10-07 07:48:58.223879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.262 [2024-10-07 07:48:58.223978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.262 [2024-10-07 07:48:58.223985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.262 [2024-10-07 07:48:58.223991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.262 [2024-10-07 07:48:58.225815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.522 [2024-10-07 07:48:58.235269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.522 [2024-10-07 07:48:58.235687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.235891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.235901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.522 [2024-10-07 07:48:58.235909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.522 [2024-10-07 07:48:58.236037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.522 [2024-10-07 07:48:58.236186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.522 [2024-10-07 07:48:58.236195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.522 [2024-10-07 07:48:58.236201] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.522 [2024-10-07 07:48:58.237958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.522 [2024-10-07 07:48:58.247253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.522 [2024-10-07 07:48:58.247652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.247916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.247927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.522 [2024-10-07 07:48:58.247934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.522 [2024-10-07 07:48:58.248083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.522 [2024-10-07 07:48:58.248168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.522 [2024-10-07 07:48:58.248175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.522 [2024-10-07 07:48:58.248182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.522 [2024-10-07 07:48:58.249771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.522 [2024-10-07 07:48:58.259030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.522 [2024-10-07 07:48:58.259368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.259640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.259650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.522 [2024-10-07 07:48:58.259657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.522 [2024-10-07 07:48:58.259753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.522 [2024-10-07 07:48:58.259921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.522 [2024-10-07 07:48:58.259929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.522 [2024-10-07 07:48:58.259935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.522 [2024-10-07 07:48:58.261646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.522 [2024-10-07 07:48:58.270955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.522 [2024-10-07 07:48:58.271355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.271608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.522 [2024-10-07 07:48:58.271618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.522 [2024-10-07 07:48:58.271625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.271736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.271817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.271824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.271831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.273648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.282830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.283224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.283499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.283509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.283516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.283607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.283698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.283705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.283711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.285322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.294578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.295007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.295346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.295379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.295403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.295785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.296040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.296048] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.296054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.297800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.306415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.306845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.307123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.307135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.307143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.307254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.307364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.307372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.307378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.309035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.318202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.318539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.318813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.318825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.318832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.318937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.319041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.319048] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.319054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.320966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.329885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.330274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.330527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.330538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.330548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.330687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.330769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.330777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.330783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.332544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.341790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.342226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.342477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.342510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.342537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.342918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.343325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.343334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.343340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.345072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.353449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.353899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.354206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.354240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.354271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.354450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.354604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.354612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.354618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.356231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.365311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.365678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.365830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.365840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.365847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.365958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.366101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.366110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.366116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.523 [2024-10-07 07:48:58.367685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.523 [2024-10-07 07:48:58.377097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.523 [2024-10-07 07:48:58.377476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.377702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-10-07 07:48:58.377712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.523 [2024-10-07 07:48:58.377719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.523 [2024-10-07 07:48:58.377801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.523 [2024-10-07 07:48:58.377897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.523 [2024-10-07 07:48:58.377905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.523 [2024-10-07 07:48:58.377911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.379605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.388861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.389288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.389517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.389548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.389572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.389859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.390161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.390187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.390209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.392243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.400666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.400961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.401178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.401188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.401196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.401335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.401459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.401467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.401474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.403221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.412426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.412861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.413134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.413169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.413193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.413475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.413770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.413778] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.413784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.415328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.423964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.424365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.424641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.424651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.424658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.424740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.424896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.424905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.424911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.426650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.435850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.436279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.436506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.436515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.436523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.436641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.436746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.436753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.436759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.438384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.447748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.448152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.448407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.448416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.448423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.448542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.448646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.448654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.448660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.450352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.459457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.459873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.460150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.460161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.460169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.460279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.460390] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.460402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.460413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.462257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.471186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.524 [2024-10-07 07:48:58.471603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.471850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-10-07 07:48:58.471860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:54.524 [2024-10-07 07:48:58.471866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:54.524 [2024-10-07 07:48:58.472012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:54.524 [2024-10-07 07:48:58.472081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.524 [2024-10-07 07:48:58.472105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.524 [2024-10-07 07:48:58.472112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.524 [2024-10-07 07:48:58.473829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.524 [2024-10-07 07:48:58.483006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.524 [2024-10-07 07:48:58.483415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.524 [2024-10-07 07:48:58.483687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.524 [2024-10-07 07:48:58.483697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.524 [2024-10-07 07:48:58.483703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.524 [2024-10-07 07:48:58.483808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.524 [2024-10-07 07:48:58.483926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.524 [2024-10-07 07:48:58.483933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.524 [2024-10-07 07:48:58.483939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.524 [2024-10-07 07:48:58.485600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.494832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.495261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.495517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.495551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.495575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.495822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.495951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.495959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.495969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.497873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.506594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.507008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.507212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.507224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.507231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.507345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.507459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.507467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.507474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.509259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.518656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.519065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.519343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.519353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.519360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.519471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.519567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.519574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.519581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.521363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.530597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.531004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.531206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.531217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.531253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.531684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.531865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.531873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.531880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.533669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.542499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.542760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.543044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.543092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.543116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.543348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.543627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.543635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.543641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.545281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.554357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.554770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.554978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.555009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.555033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.785 [2024-10-07 07:48:58.555331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.785 [2024-10-07 07:48:58.555598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.785 [2024-10-07 07:48:58.555607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.785 [2024-10-07 07:48:58.555613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.785 [2024-10-07 07:48:58.557272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.785 [2024-10-07 07:48:58.566139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.785 [2024-10-07 07:48:58.566564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.566893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.785 [2024-10-07 07:48:58.566925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.785 [2024-10-07 07:48:58.566949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.567349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.567586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.567610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.567632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.570348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.578931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.579248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.579528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.579539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.579546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.579682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.579819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.579828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.579835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.581775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.590895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.591275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.591480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.591490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.591497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.591608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.591703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.591711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.591717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.593439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.602684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.602991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.603239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.603273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.603297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.603777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.603951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.603959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.603966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.605622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.614573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.614936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.615152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.615163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.615171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.615285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.615354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.615361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.615368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.617165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.626410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.626743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.626944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.626953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.626960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.627083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.627254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.627262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.627268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.629285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.638279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.638599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.638883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.638915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.638938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.639332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.639669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.639693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.639714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.786 [2024-10-07 07:48:58.641598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.786 [2024-10-07 07:48:58.650017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.786 [2024-10-07 07:48:58.650390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.650541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.786 [2024-10-07 07:48:58.650552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.786 [2024-10-07 07:48:58.650562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.786 [2024-10-07 07:48:58.650673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.786 [2024-10-07 07:48:58.650798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.786 [2024-10-07 07:48:58.650806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.786 [2024-10-07 07:48:58.650812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.652558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.661939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.662307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.662598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.662629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.662652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.663146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.663305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.663313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.663320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.665124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.673843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.674251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.674442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.674452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.674458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.674563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.674681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.674689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.674696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.676442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.685734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.686085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.686300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.686310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.686317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.686446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.686570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.686578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.686584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.688309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.697700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.698075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.698280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.698290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.698298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.698394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.698518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.698526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.698533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.700269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.709548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.709893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.710047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.710062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.710069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.710195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.710290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.710299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.710305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.712180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.721309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.721696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.721975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.722007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.722030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.722471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.722865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.722891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.722912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.724878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.733176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.733551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.733878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.733910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.733932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.734165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.734263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.734271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.734277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.787 [2024-10-07 07:48:58.736066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.787 [2024-10-07 07:48:58.744956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.787 [2024-10-07 07:48:58.745316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.745472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.787 [2024-10-07 07:48:58.745483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:54.787 [2024-10-07 07:48:58.745504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:54.787 [2024-10-07 07:48:58.745888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:54.787 [2024-10-07 07:48:58.746199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.787 [2024-10-07 07:48:58.746208] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.787 [2024-10-07 07:48:58.746215] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.788 [2024-10-07 07:48:58.747970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.047 [2024-10-07 07:48:58.756999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.047 [2024-10-07 07:48:58.757288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.757441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.757453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.047 [2024-10-07 07:48:58.757462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.047 [2024-10-07 07:48:58.757547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.047 [2024-10-07 07:48:58.757630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.047 [2024-10-07 07:48:58.757641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.047 [2024-10-07 07:48:58.757648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.047 [2024-10-07 07:48:58.759218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.047 [2024-10-07 07:48:58.768998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.047 [2024-10-07 07:48:58.769303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.769509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.769520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.047 [2024-10-07 07:48:58.769527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.047 [2024-10-07 07:48:58.769639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.047 [2024-10-07 07:48:58.769735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.047 [2024-10-07 07:48:58.769742] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.047 [2024-10-07 07:48:58.769748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.047 [2024-10-07 07:48:58.771556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.047 [2024-10-07 07:48:58.780830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.047 [2024-10-07 07:48:58.781232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.781441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.781452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.047 [2024-10-07 07:48:58.781459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.047 [2024-10-07 07:48:58.781558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.047 [2024-10-07 07:48:58.781671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.047 [2024-10-07 07:48:58.781680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.047 [2024-10-07 07:48:58.781686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.047 [2024-10-07 07:48:58.783489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.047 [2024-10-07 07:48:58.792753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.047 [2024-10-07 07:48:58.793172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.793393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.047 [2024-10-07 07:48:58.793404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.047 [2024-10-07 07:48:58.793411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.047 [2024-10-07 07:48:58.793522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.047 [2024-10-07 07:48:58.793647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.047 [2024-10-07 07:48:58.793655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.047 [2024-10-07 07:48:58.793665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.047 [2024-10-07 07:48:58.795364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.047 [2024-10-07 07:48:58.804598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.805034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.805267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.805279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.805286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.805411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.047 [2024-10-07 07:48:58.805507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.047 [2024-10-07 07:48:58.805515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.047 [2024-10-07 07:48:58.805521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.047 [2024-10-07 07:48:58.807355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.047 [2024-10-07 07:48:58.816333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.816652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.816864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.816895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.816918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.817310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.047 [2024-10-07 07:48:58.817515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.047 [2024-10-07 07:48:58.817523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.047 [2024-10-07 07:48:58.817529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.047 [2024-10-07 07:48:58.819303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.047 [2024-10-07 07:48:58.828095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.828379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.828543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.828574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.828597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.829091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.047 [2024-10-07 07:48:58.829304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.047 [2024-10-07 07:48:58.829312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.047 [2024-10-07 07:48:58.829319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.047 [2024-10-07 07:48:58.830961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.047 [2024-10-07 07:48:58.839899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.840269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.840528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.840560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.840583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.840964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.047 [2024-10-07 07:48:58.841260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.047 [2024-10-07 07:48:58.841286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.047 [2024-10-07 07:48:58.841308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.047 [2024-10-07 07:48:58.843311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.047 [2024-10-07 07:48:58.851763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.852242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.852388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.852398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.852405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.852544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.047 [2024-10-07 07:48:58.852683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.047 [2024-10-07 07:48:58.852691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.047 [2024-10-07 07:48:58.852697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.047 [2024-10-07 07:48:58.854261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.047 [2024-10-07 07:48:58.863696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.047 [2024-10-07 07:48:58.864142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.864425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.047 [2024-10-07 07:48:58.864457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.047 [2024-10-07 07:48:58.864480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.047 [2024-10-07 07:48:58.864811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.865106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.865133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.865154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.867136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.875482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.875761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.875967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.875977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.875985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.876099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.876226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.876234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.876240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.877816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.887161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.887521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.887824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.887834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.887841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.887966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.888082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.888091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.888097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.889801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.898943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.899265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.899423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.899433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.899440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.899536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.899647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.899654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.899660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.901405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.910749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.911167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.911379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.911412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.911436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.911613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.911710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.911719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.911725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.913410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.922614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.923000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.923216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.923250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.923274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.923754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.924026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.924035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.924042] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.925747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.934395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.934786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.934961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.934992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.935016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.935312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.935647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.935672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.935694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.937484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.946112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.946342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.946592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.946605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.946612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.946708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.946833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.946841] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.946847] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.948492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.957828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.958203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.958463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.958495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.958518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.958850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.959006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.959014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.959020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.960824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.969664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.970103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.970391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.970422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.970446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.970827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.971022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.971035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.971045] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.973835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.982347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.982672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.982926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.982938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.982949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.983075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.983196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.983205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.983212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.985007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:58.994063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:58.994385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.994733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:58.994765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:58.994789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:58.995014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:58.995129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:58.995137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:58.995144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:58.996972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.048 [2024-10-07 07:48:59.006146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.048 [2024-10-07 07:48:59.006452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:59.006612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.048 [2024-10-07 07:48:59.006657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.048 [2024-10-07 07:48:59.006682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.048 [2024-10-07 07:48:59.007126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.048 [2024-10-07 07:48:59.007242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.048 [2024-10-07 07:48:59.007252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.048 [2024-10-07 07:48:59.007260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.048 [2024-10-07 07:48:59.009002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.308 [2024-10-07 07:48:59.017925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.308 [2024-10-07 07:48:59.018347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.308 [2024-10-07 07:48:59.018574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.308 [2024-10-07 07:48:59.018606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.308 [2024-10-07 07:48:59.018630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.308 [2024-10-07 07:48:59.018843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.308 [2024-10-07 07:48:59.018922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.308 [2024-10-07 07:48:59.018930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.308 [2024-10-07 07:48:59.018936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.308 [2024-10-07 07:48:59.020615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.308 [2024-10-07 07:48:59.029700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.308 [2024-10-07 07:48:59.030088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.308 [2024-10-07 07:48:59.030294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.308 [2024-10-07 07:48:59.030304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.308 [2024-10-07 07:48:59.030312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.308 [2024-10-07 07:48:59.030451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.030562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.030570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.030576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.032225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.041529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.041886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.042173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.042208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.042231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.042611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.042978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.042986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.042993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.044771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.053394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.053747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.053950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.053960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.053967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.054065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.054195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.054202] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.054209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.055872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.065156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.065542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.065837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.065868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.065891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.066184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.066296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.066304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.066310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.068071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.076899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.077283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.077533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.077543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.077550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.077668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.077772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.077780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.077786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.079478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.088678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.089064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.089353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.089363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.089370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.089495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.089620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.089631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.089637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.091232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.100595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.101017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.101301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.101336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.101359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.101789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.102183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.102210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.102231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.105239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.113137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.113512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.113735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.113766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.113790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.114188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.114383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.114392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.114399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.116321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.125066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.125460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.125785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.125816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.125840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.126174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.126286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.126294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.126304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.128092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.309 [2024-10-07 07:48:59.136841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.309 [2024-10-07 07:48:59.137227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.137492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.309 [2024-10-07 07:48:59.137501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.309 [2024-10-07 07:48:59.137508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.309 [2024-10-07 07:48:59.137633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.309 [2024-10-07 07:48:59.137729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.309 [2024-10-07 07:48:59.137736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.309 [2024-10-07 07:48:59.137743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.309 [2024-10-07 07:48:59.139543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.148697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.149105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.149377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.149387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.149394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.149505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.149630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.149637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.149643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.151280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.160580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.160975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.161266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.161301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.161324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.161656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.162038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.162072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.162095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.163901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.172538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.172969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.173152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.173163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.173171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.173238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.173320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.173327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.173334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.175071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.184340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.184741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.184994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.185004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.185011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.185142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.185224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.185231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.185238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.186929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.196064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.196456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.196784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.196816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.196839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.197235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.197571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.197579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.197585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.199348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.207766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.208165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.208441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.208473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.208497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.208977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.209322] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.209350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.209372] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.211318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.219411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.219857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.220088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.220099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.220106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.220233] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.220357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.220365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.220372] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.222254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.231193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.231592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.231882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.231891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.231898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.232015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.232172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.232181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.232187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.233878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.243002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.243412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.243626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.243636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.310 [2024-10-07 07:48:59.243643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.310 [2024-10-07 07:48:59.243768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.310 [2024-10-07 07:48:59.243879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.310 [2024-10-07 07:48:59.243886] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.310 [2024-10-07 07:48:59.243893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.310 [2024-10-07 07:48:59.245553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.310 [2024-10-07 07:48:59.254983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.310 [2024-10-07 07:48:59.255400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.310 [2024-10-07 07:48:59.255634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.311 [2024-10-07 07:48:59.255667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.311 [2024-10-07 07:48:59.255692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.311 [2024-10-07 07:48:59.256087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.311 [2024-10-07 07:48:59.256372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.311 [2024-10-07 07:48:59.256398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.311 [2024-10-07 07:48:59.256428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.311 [2024-10-07 07:48:59.258250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.311 [2024-10-07 07:48:59.266723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.311 [2024-10-07 07:48:59.267137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.311 [2024-10-07 07:48:59.267471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.311 [2024-10-07 07:48:59.267504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.311 [2024-10-07 07:48:59.267527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.311 [2024-10-07 07:48:59.267665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.311 [2024-10-07 07:48:59.267780] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.311 [2024-10-07 07:48:59.267788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.311 [2024-10-07 07:48:59.267794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.311 [2024-10-07 07:48:59.269609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.571 [2024-10-07 07:48:59.278659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.571 [2024-10-07 07:48:59.279037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.571 [2024-10-07 07:48:59.279344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.571 [2024-10-07 07:48:59.279377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.571 [2024-10-07 07:48:59.279410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.571 [2024-10-07 07:48:59.279665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.571 [2024-10-07 07:48:59.279803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.571 [2024-10-07 07:48:59.279811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.571 [2024-10-07 07:48:59.279817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.571 [2024-10-07 07:48:59.281522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.571 [2024-10-07 07:48:59.290579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.571 [2024-10-07 07:48:59.291011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.571 [2024-10-07 07:48:59.291293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.291304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.572 [2024-10-07 07:48:59.291311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.572 [2024-10-07 07:48:59.291425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.572 [2024-10-07 07:48:59.291553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.572 [2024-10-07 07:48:59.291562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.572 [2024-10-07 07:48:59.291569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.572 [2024-10-07 07:48:59.293332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.572 [2024-10-07 07:48:59.302475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.572 [2024-10-07 07:48:59.302861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.303144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.303180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.572 [2024-10-07 07:48:59.303204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.572 [2024-10-07 07:48:59.303409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.572 [2024-10-07 07:48:59.303508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.572 [2024-10-07 07:48:59.303518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.572 [2024-10-07 07:48:59.303524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.572 [2024-10-07 07:48:59.305213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.572 [2024-10-07 07:48:59.314279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.572 [2024-10-07 07:48:59.314705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.314976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.314988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.572 [2024-10-07 07:48:59.314996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.572 [2024-10-07 07:48:59.315099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.572 [2024-10-07 07:48:59.315226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.572 [2024-10-07 07:48:59.315235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.572 [2024-10-07 07:48:59.315242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.572 [2024-10-07 07:48:59.316910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.572 [2024-10-07 07:48:59.326111] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.572 [2024-10-07 07:48:59.326488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.326717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.326750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.572 [2024-10-07 07:48:59.326775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.572 [2024-10-07 07:48:59.327270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.572 [2024-10-07 07:48:59.327567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.572 [2024-10-07 07:48:59.327577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.572 [2024-10-07 07:48:59.327584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.572 [2024-10-07 07:48:59.329284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.572 [2024-10-07 07:48:59.337879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.572 [2024-10-07 07:48:59.338312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.338610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.572 [2024-10-07 07:48:59.338642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:55.572 [2024-10-07 07:48:59.338666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:55.572 [2024-10-07 07:48:59.338854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:55.572 [2024-10-07 07:48:59.338947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.572 [2024-10-07 07:48:59.338957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.572 [2024-10-07 07:48:59.338963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.572 [2024-10-07 07:48:59.340655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.572 [2024-10-07 07:48:59.349401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.572 [2024-10-07 07:48:59.349841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.350127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.350164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.572 [2024-10-07 07:48:59.350188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.572 [2024-10-07 07:48:59.350535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.572 [2024-10-07 07:48:59.350632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.572 [2024-10-07 07:48:59.350642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.572 [2024-10-07 07:48:59.350649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.572 [2024-10-07 07:48:59.352244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.572 [2024-10-07 07:48:59.361184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.572 [2024-10-07 07:48:59.361494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.361808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.361841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.572 [2024-10-07 07:48:59.361865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.572 [2024-10-07 07:48:59.362257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.572 [2024-10-07 07:48:59.362543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.572 [2024-10-07 07:48:59.362570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.572 [2024-10-07 07:48:59.362597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.572 [2024-10-07 07:48:59.364401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.572 [2024-10-07 07:48:59.373181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.572 [2024-10-07 07:48:59.373544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.373807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.572 [2024-10-07 07:48:59.373840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.572 [2024-10-07 07:48:59.373863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.572 [2024-10-07 07:48:59.374164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.374650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.374676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.374699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.376494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.385017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.385356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.385692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.385726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.385750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.385971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.386051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.386070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.386076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.387847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.397031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.397490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.397718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.397752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.397776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.398170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.398656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.398682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.398703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.400307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.408740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.409171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.409452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.409484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.409508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.409667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.409760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.409770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.409776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.411471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.420486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.420839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.421083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.421118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.421143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.421524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.421727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.421738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.421748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.423509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.432365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.432793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.433077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.433113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.433143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.433277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.433370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.433380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.433386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.435049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.444312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.444744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.445052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.445102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.445126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.445458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.445748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.445757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.445764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.448036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.573 [2024-10-07 07:48:59.457125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.573 [2024-10-07 07:48:59.457557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.457880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.573 [2024-10-07 07:48:59.457914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.573 [2024-10-07 07:48:59.457938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.573 [2024-10-07 07:48:59.458153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.573 [2024-10-07 07:48:59.458277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.573 [2024-10-07 07:48:59.458287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.573 [2024-10-07 07:48:59.458294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.573 [2024-10-07 07:48:59.460209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 99947 Killed "${NVMF_APP[@]}" "$@" 00:29:55.574 07:48:59 -- host/bdevperf.sh@36 -- # tgt_init 00:29:55.574 07:48:59 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:55.574 07:48:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:55.574 07:48:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:55.574 07:48:59 -- common/autotest_common.sh@10 -- # set +x 00:29:55.574 [2024-10-07 07:48:59.469106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.469475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.469684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.469697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.469705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.469819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.574 [2024-10-07 07:48:59.469935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.469945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.469952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:55.574 07:48:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:55.574 07:48:59 -- nvmf/common.sh@469 -- # nvmfpid=101346 00:29:55.574 07:48:59 -- nvmf/common.sh@470 -- # waitforlisten 101346 [2024-10-07 07:48:59.471607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.574 07:48:59 -- common/autotest_common.sh@819 -- # '[' -z 101346 ']' 00:29:55.574 07:48:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.574 07:48:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:55.574 07:48:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.574 07:48:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:55.574 07:48:59 -- common/autotest_common.sh@10 -- # set +x 00:29:55.574 [2024-10-07 07:48:59.481129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.481540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.481797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.481807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.481815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.481944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 
00:29:55.574 [2024-10-07 07:48:59.482079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.482088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.482095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.574 [2024-10-07 07:48:59.483715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.574 [2024-10-07 07:48:59.493100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.493541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.493816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.493828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.493836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.493979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.574 [2024-10-07 07:48:59.494100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.494110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.494118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.574 [2024-10-07 07:48:59.495842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.574 [2024-10-07 07:48:59.502719] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:55.574 [2024-10-07 07:48:59.502759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.574 [2024-10-07 07:48:59.505206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.505567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.505793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.505806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.505814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.505929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.574 [2024-10-07 07:48:59.506030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.506040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.506049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.574 [2024-10-07 07:48:59.507904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.574 [2024-10-07 07:48:59.517199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.517613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.517812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.517823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.517832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.517946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.574 [2024-10-07 07:48:59.518031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.518039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.518046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.574 [2024-10-07 07:48:59.519830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.574 [2024-10-07 07:48:59.529067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.574 [2024-10-07 07:48:59.529483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.574 [2024-10-07 07:48:59.529759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.574 [2024-10-07 07:48:59.529771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.574 [2024-10-07 07:48:59.529780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.574 [2024-10-07 07:48:59.529893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.574 [2024-10-07 07:48:59.530023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.574 [2024-10-07 07:48:59.530033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.574 [2024-10-07 07:48:59.530040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.574 [2024-10-07 07:48:59.531822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.541140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.541550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.541755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.541766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.541773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.541887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.541973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.541981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.541988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.543740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.553037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.553469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.553695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.553706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.553715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.553841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.553953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.553962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.553969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.555761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.556687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:55.836 [2024-10-07 07:48:59.565081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.565416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.565693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.565705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.565713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.565853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.565923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.565932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.565938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.567847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.577020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.577350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.577627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.577639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.577647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.577772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.577870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.577880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.577887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.579568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.588954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.589333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.589614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.589627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.589635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.589732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.589887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.589899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.589906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.591627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.600960] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.601406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.601617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.601630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.601640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.601798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.601925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.601937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.601944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.603715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.612954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.613413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.613669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.613682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.613691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.613790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.613903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.613912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.613920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.836 [2024-10-07 07:48:59.615658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.836 [2024-10-07 07:48:59.625017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.836 [2024-10-07 07:48:59.625402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.625680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.836 [2024-10-07 07:48:59.625693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.836 [2024-10-07 07:48:59.625701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.836 [2024-10-07 07:48:59.625770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.836 [2024-10-07 07:48:59.625898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.836 [2024-10-07 07:48:59.625907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.836 [2024-10-07 07:48:59.625914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.627737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.837 [2024-10-07 07:48:59.628064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:55.837 [2024-10-07 07:48:59.628165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.837 [2024-10-07 07:48:59.628177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.837 [2024-10-07 07:48:59.628184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:55.837 [2024-10-07 07:48:59.628217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.837 [2024-10-07 07:48:59.628305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.837 [2024-10-07 07:48:59.628306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.837 [2024-10-07 07:48:59.636913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.637264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.637565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.637578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.637587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.637705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.637851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.637861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.637870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.639702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.648969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.649434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.649687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.649700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.649710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.649828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.649944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.649954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.649962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.651747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.660929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.661402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.661670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.661683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.661693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.661811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.661957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.661974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.661983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.663699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.672818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.673213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.673474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.673487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.673496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.673599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.673715] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.673725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.673734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.675476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.684788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.685221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.685498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.685511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.685521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.685623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.685755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.685765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.685773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.687674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.696697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.697033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.697258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.697270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.697278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.697349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.697420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.697429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.697442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.699194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.708726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.709139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.709425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.709437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.709445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.709589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.709705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.709715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.709722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.711549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.720618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.837 [2024-10-07 07:48:59.720980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.721187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.837 [2024-10-07 07:48:59.721200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.837 [2024-10-07 07:48:59.721209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.837 [2024-10-07 07:48:59.721325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.837 [2024-10-07 07:48:59.721426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.837 [2024-10-07 07:48:59.721435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.837 [2024-10-07 07:48:59.721442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.837 [2024-10-07 07:48:59.723271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.837 [2024-10-07 07:48:59.732593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.733000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.733287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.733299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.733308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.733438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.733524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.733533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.733541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.735242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.744668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.745070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.745349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.745362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.745370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.745499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.745615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.745625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.745632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.747518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.756666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.757022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.757295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.757307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.757317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.757432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.757548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.757557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.757564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.759368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.768492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.768891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.769119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.769134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.769141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.769257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.769373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.769381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.769388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.771174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.780590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.780997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.781275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.781287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.781295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.781394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.781480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.781488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.781495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.783426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.792601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.838 [2024-10-07 07:48:59.793001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.793277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.838 [2024-10-07 07:48:59.793289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:55.838 [2024-10-07 07:48:59.793298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:55.838 [2024-10-07 07:48:59.793413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:55.838 [2024-10-07 07:48:59.793527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.838 [2024-10-07 07:48:59.793537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.838 [2024-10-07 07:48:59.793545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.838 [2024-10-07 07:48:59.795225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.838 [2024-10-07 07:48:59.804564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.099 [2024-10-07 07:48:59.804972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.099 [2024-10-07 07:48:59.805180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.099 [2024-10-07 07:48:59.805192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.099 [2024-10-07 07:48:59.805200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.099 [2024-10-07 07:48:59.805316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.099 [2024-10-07 07:48:59.805417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.099 [2024-10-07 07:48:59.805426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.099 [2024-10-07 07:48:59.805433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.099 [2024-10-07 07:48:59.807166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.099 [2024-10-07 07:48:59.816588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.099 [2024-10-07 07:48:59.816964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-10-07 07:48:59.817175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-10-07 07:48:59.817189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.099 [2024-10-07 07:48:59.817197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.099 [2024-10-07 07:48:59.817314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.099 [2024-10-07 07:48:59.817370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.099 [2024-10-07 07:48:59.817380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.099 [2024-10-07 07:48:59.817388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.099 [2024-10-07 07:48:59.818937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.099 [2024-10-07 07:48:59.828539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.099 [2024-10-07 07:48:59.828946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-10-07 07:48:59.829141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.099 [2024-10-07 07:48:59.829154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.829162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.829262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.829377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.829387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.829394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.831166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.840507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.840943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.841168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.841180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.841188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.841288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.841434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.841444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.841451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.843407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.852436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.852834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.853021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.853035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.853043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.853178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.853294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.853303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.853310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.855183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.864294] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.864712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.864905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.864917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.864925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.865054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.865190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.865200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.865207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.866956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.876191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.876618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.876745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.876757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.876765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.876879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.876950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.876959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.876967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.878748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.888145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.888559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.888770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.888782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.888793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.888878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.889024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.889034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.889041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.890867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.900085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.900522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.900734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.900746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.900754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.900883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.901013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.901023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.901029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.902708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.912012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.912411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.912632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.912644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.912651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.912781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.912942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.912951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.912958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.914843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.924106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.924515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.924768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.924779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.100 [2024-10-07 07:48:59.924787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.100 [2024-10-07 07:48:59.924935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.100 [2024-10-07 07:48:59.925036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.100 [2024-10-07 07:48:59.925045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.100 [2024-10-07 07:48:59.925052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.100 [2024-10-07 07:48:59.926736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.100 [2024-10-07 07:48:59.936141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.100 [2024-10-07 07:48:59.936441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.936644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.100 [2024-10-07 07:48:59.936655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.936663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.936778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.936893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.936903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.936910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.938708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:48:59.948068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:48:59.948458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.948607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.948619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.948627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.948726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.948872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.948882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.948889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.950746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:48:59.959884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:48:59.960305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.960517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.960528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.960537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.960637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.960772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.960781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.960788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.962410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:48:59.971783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:48:59.972027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.972231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.972244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.972252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.972382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.972497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.972506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.972513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.974326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:48:59.983726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:48:59.984154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.984444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.984456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.984464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.984610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.984681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.984690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.984697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.986616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:48:59.995856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:48:59.996188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.996350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:48:59.996362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:48:59.996371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:48:59.996487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:48:59.996602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:48:59.996611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:48:59.996622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:48:59.998642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:49:00.008016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:49:00.008329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.008528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.008540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:49:00.008549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:49:00.008655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:49:00.008747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:49:00.008759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:49:00.008767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:49:00.010750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:49:00.020103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:49:00.020564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.020836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.020850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:49:00.020859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:49:00.020947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:49:00.021069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:49:00.021079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:49:00.021088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:49:00.022695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:49:00.032099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:49:00.032507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.032716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.032729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:49:00.032739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:49:00.032854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:49:00.032984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:49:00.032994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.101 [2024-10-07 07:49:00.033005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.101 [2024-10-07 07:49:00.034642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.101 [2024-10-07 07:49:00.044169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.101 [2024-10-07 07:49:00.044572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.044721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.101 [2024-10-07 07:49:00.044734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.101 [2024-10-07 07:49:00.044742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.101 [2024-10-07 07:49:00.044859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.101 [2024-10-07 07:49:00.044974] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.101 [2024-10-07 07:49:00.044985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.102 [2024-10-07 07:49:00.044992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.102 [2024-10-07 07:49:00.046881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.102 [2024-10-07 07:49:00.056038] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.102 [2024-10-07 07:49:00.056531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.102 [2024-10-07 07:49:00.056818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.102 [2024-10-07 07:49:00.056830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.102 [2024-10-07 07:49:00.056838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.102 [2024-10-07 07:49:00.056956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.102 [2024-10-07 07:49:00.057027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.102 [2024-10-07 07:49:00.057035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.102 [2024-10-07 07:49:00.057043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.102 [2024-10-07 07:49:00.058731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.362 [2024-10-07 07:49:00.068248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.362 [2024-10-07 07:49:00.068623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.068866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.068880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.362 [2024-10-07 07:49:00.068888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.362 [2024-10-07 07:49:00.069035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.362 [2024-10-07 07:49:00.069172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.362 [2024-10-07 07:49:00.069182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.362 [2024-10-07 07:49:00.069189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.362 [2024-10-07 07:49:00.070906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.362 [2024-10-07 07:49:00.080332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.362 [2024-10-07 07:49:00.080615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.080859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.080871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.362 [2024-10-07 07:49:00.080880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.362 [2024-10-07 07:49:00.081010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.362 [2024-10-07 07:49:00.081147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.362 [2024-10-07 07:49:00.081158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.362 [2024-10-07 07:49:00.081165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.362 [2024-10-07 07:49:00.082990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.362 [2024-10-07 07:49:00.092332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.362 [2024-10-07 07:49:00.092720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.092983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.362 [2024-10-07 07:49:00.092995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.362 [2024-10-07 07:49:00.093004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.362 [2024-10-07 07:49:00.093124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.362 [2024-10-07 07:49:00.093255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.362 [2024-10-07 07:49:00.093266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.362 [2024-10-07 07:49:00.093273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.363 [2024-10-07 07:49:00.095115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.363 [2024-10-07 07:49:00.104402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.363 [2024-10-07 07:49:00.104794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.105020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.105032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.363 [2024-10-07 07:49:00.105040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.363 [2024-10-07 07:49:00.105147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.363 [2024-10-07 07:49:00.105248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.363 [2024-10-07 07:49:00.105257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.363 [2024-10-07 07:49:00.105264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.363 [2024-10-07 07:49:00.106915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.363 [2024-10-07 07:49:00.116222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.363 [2024-10-07 07:49:00.116634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.116843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.116855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.363 [2024-10-07 07:49:00.116864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.363 [2024-10-07 07:49:00.116979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.363 [2024-10-07 07:49:00.117100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.363 [2024-10-07 07:49:00.117110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.363 [2024-10-07 07:49:00.117117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.363 [2024-10-07 07:49:00.118707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.363 [2024-10-07 07:49:00.128031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.363 [2024-10-07 07:49:00.128442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.128543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.363 [2024-10-07 07:49:00.128556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420
00:29:56.363 [2024-10-07 07:49:00.128563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set
00:29:56.363 [2024-10-07 07:49:00.128693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor
00:29:56.363 [2024-10-07 07:49:00.128822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.363 [2024-10-07 07:49:00.128831] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.363 [2024-10-07 07:49:00.128838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.363 [2024-10-07 07:49:00.130722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.363 [2024-10-07 07:49:00.139982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.140347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.140630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.140642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.140650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.140765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.140895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.140906] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.140913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.142682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.151853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.152183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.152349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.152361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.152369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.152485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.152630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.152641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.152647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.154330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.163691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.163994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.164182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.164195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.164204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.164333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.164448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.164457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.164464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.166132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.175611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.176023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.176246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.176259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.176267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.176338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.176453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.176462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.176469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.178177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.187507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.187756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.187921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.187933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.187945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.188065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.188166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.188174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.188181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.190019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.199459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.199730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.199941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.199954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.363 [2024-10-07 07:49:00.199962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.363 [2024-10-07 07:49:00.200111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.363 [2024-10-07 07:49:00.200241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.363 [2024-10-07 07:49:00.200250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.363 [2024-10-07 07:49:00.200257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.363 [2024-10-07 07:49:00.202038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.363 [2024-10-07 07:49:00.211360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.363 [2024-10-07 07:49:00.211681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.211940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.363 [2024-10-07 07:49:00.211952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.211960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.212066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.212196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.212205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.212212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.214097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.223342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.223714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.223925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.223937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.223945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.224033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.224154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.224164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.224171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.226007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.235352] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.235740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.235873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.235884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.235893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.236008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.236128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.236140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.236147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.237868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.247298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.247616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.247771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.247783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.247792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.247906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.248036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.248046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.248053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.249824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.259211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.259561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.259827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.259839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.259849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.259980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.260090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.260100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.260108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.261918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.271154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.271479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.271742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.271755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.271763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.271907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.272037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.272047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.272053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.273854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.283052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.283342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.283600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.283612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.283620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.283735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.283879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.283890] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.283897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.285683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.295070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.295319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.295552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.295564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.295573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.295643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.295743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.295755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.295762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.297550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 [2024-10-07 07:49:00.306845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 [2024-10-07 07:49:00.307252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.307412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.307424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.307432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.307532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 [2024-10-07 07:49:00.307647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.364 [2024-10-07 07:49:00.307657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.364 [2024-10-07 07:49:00.307664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.364 [2024-10-07 07:49:00.309498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.364 07:49:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:56.364 07:49:00 -- common/autotest_common.sh@852 -- # return 0 00:29:56.364 [2024-10-07 07:49:00.318771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.364 07:49:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:56.364 [2024-10-07 07:49:00.319087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 07:49:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:56.364 [2024-10-07 07:49:00.319296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.364 [2024-10-07 07:49:00.319309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.364 [2024-10-07 07:49:00.319317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.364 [2024-10-07 07:49:00.319418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.364 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.365 [2024-10-07 07:49:00.319548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.365 [2024-10-07 07:49:00.319558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.365 [2024-10-07 07:49:00.319566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.365 [2024-10-07 07:49:00.321190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.365 [2024-10-07 07:49:00.330652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.365 [2024-10-07 07:49:00.330958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-10-07 07:49:00.331118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.365 [2024-10-07 07:49:00.331133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.365 [2024-10-07 07:49:00.331141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.365 [2024-10-07 07:49:00.331242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.365 [2024-10-07 07:49:00.331361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.365 [2024-10-07 07:49:00.331372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.365 [2024-10-07 07:49:00.331379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 [2024-10-07 07:49:00.333378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 [2024-10-07 07:49:00.342589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.625 [2024-10-07 07:49:00.342903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.343069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.343083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.625 [2024-10-07 07:49:00.343091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.625 [2024-10-07 07:49:00.343221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.625 [2024-10-07 07:49:00.343321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.625 [2024-10-07 07:49:00.343331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.625 [2024-10-07 07:49:00.343338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 [2024-10-07 07:49:00.345097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 [2024-10-07 07:49:00.354534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.625 07:49:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.625 [2024-10-07 07:49:00.354798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.354957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.354970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.625 [2024-10-07 07:49:00.354978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.625 07:49:00 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.625 [2024-10-07 07:49:00.355084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.625 [2024-10-07 07:49:00.355215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.625 [2024-10-07 07:49:00.355225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.625 [2024-10-07 07:49:00.355232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 07:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.625 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.625 [2024-10-07 07:49:00.356894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 [2024-10-07 07:49:00.359841] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.625 07:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.625 07:49:00 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:56.625 07:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.625 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.625 [2024-10-07 07:49:00.366486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.625 [2024-10-07 07:49:00.366868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.367027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.367039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.625 [2024-10-07 07:49:00.367047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.625 [2024-10-07 07:49:00.367153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.625 [2024-10-07 07:49:00.367283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.625 [2024-10-07 07:49:00.367294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.625 [2024-10-07 07:49:00.367301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 [2024-10-07 07:49:00.369202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 [2024-10-07 07:49:00.378394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.625 [2024-10-07 07:49:00.378779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.379042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.379054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.625 [2024-10-07 07:49:00.379067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.625 [2024-10-07 07:49:00.379182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.625 [2024-10-07 07:49:00.379312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.625 [2024-10-07 07:49:00.379322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.625 [2024-10-07 07:49:00.379328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 [2024-10-07 07:49:00.381099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 [2024-10-07 07:49:00.390432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.625 [2024-10-07 07:49:00.390715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.390881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.625 [2024-10-07 07:49:00.390894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.625 [2024-10-07 07:49:00.390902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.625 [2024-10-07 07:49:00.391047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.625 [2024-10-07 07:49:00.391201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.625 [2024-10-07 07:49:00.391212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.625 [2024-10-07 07:49:00.391220] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.625 [2024-10-07 07:49:00.393031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.625 Malloc0 00:29:56.625 07:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.625 07:49:00 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.625 07:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.625 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.625 [2024-10-07 07:49:00.402263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.626 [2024-10-07 07:49:00.402587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.626 [2024-10-07 07:49:00.402819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.626 [2024-10-07 07:49:00.402831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.626 [2024-10-07 07:49:00.402839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.626 [2024-10-07 07:49:00.402939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.626 [2024-10-07 07:49:00.403025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.626 [2024-10-07 07:49:00.403034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.626 [2024-10-07 07:49:00.403041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.626 [2024-10-07 07:49:00.404709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.626 07:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.626 07:49:00 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:56.626 07:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.626 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.626 [2024-10-07 07:49:00.414241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.626 [2024-10-07 07:49:00.414509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.626 [2024-10-07 07:49:00.414706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.626 [2024-10-07 07:49:00.414719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b2a0 with addr=10.0.0.2, port=4420 00:29:56.626 [2024-10-07 07:49:00.414726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b2a0 is same with the state(5) to be set 00:29:56.626 [2024-10-07 07:49:00.414842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b2a0 (9): Bad file descriptor 00:29:56.626 [2024-10-07 07:49:00.414986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.626 [2024-10-07 07:49:00.414996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.626 [2024-10-07 07:49:00.415003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.626 [2024-10-07 07:49:00.416804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.626 07:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.626 07:49:00 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.626 07:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.626 07:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:56.626 [2024-10-07 07:49:00.421468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.626 07:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.626 [2024-10-07 07:49:00.426267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.626 07:49:00 -- host/bdevperf.sh@38 -- # wait 100426 00:29:56.626 [2024-10-07 07:49:00.572071] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:06.740 00:30:06.740 Latency(us) 00:30:06.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.740 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:06.740 Verification LBA range: start 0x0 length 0x4000 00:30:06.740 Nvme1n1 : 15.00 12792.94 49.97 20137.97 0.00 3875.51 709.97 22094.99 00:30:06.741 =================================================================================================================== 00:30:06.741 Total : 12792.94 49.97 20137.97 0.00 3875.51 709.97 22094.99 00:30:06.741 07:49:09 -- host/bdevperf.sh@39 -- # sync 00:30:06.741 07:49:09 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.741 07:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:06.741 07:49:09 -- common/autotest_common.sh@10 -- # set +x 00:30:06.741 07:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:06.741 07:49:09 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:06.741 07:49:09 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:06.741 07:49:09 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:30:06.741 07:49:09 -- nvmf/common.sh@116 -- # sync 00:30:06.741 07:49:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:06.741 07:49:09 -- nvmf/common.sh@119 -- # set +e 00:30:06.741 07:49:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:06.741 07:49:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:06.741 rmmod nvme_tcp 00:30:06.741 rmmod nvme_fabrics 00:30:06.741 rmmod nvme_keyring 00:30:06.741 07:49:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:06.741 07:49:09 -- nvmf/common.sh@123 -- # set -e 00:30:06.741 07:49:09 -- nvmf/common.sh@124 -- # return 0 00:30:06.741 07:49:09 -- nvmf/common.sh@477 -- # '[' -n 101346 ']' 00:30:06.741 07:49:09 -- nvmf/common.sh@478 -- # killprocess 101346 00:30:06.741 07:49:09 -- common/autotest_common.sh@926 -- # '[' -z 101346 ']' 00:30:06.741 07:49:09 -- common/autotest_common.sh@930 -- # kill -0 101346 00:30:06.741 07:49:09 -- common/autotest_common.sh@931 -- # uname 00:30:06.741 07:49:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:06.741 07:49:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101346 00:30:06.741 07:49:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:06.741 07:49:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:06.741 07:49:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101346' 00:30:06.741 killing process with pid 101346 00:30:06.741 07:49:09 -- common/autotest_common.sh@945 -- # kill 101346 00:30:06.741 07:49:09 -- common/autotest_common.sh@950 -- # wait 101346 00:30:06.741 07:49:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:06.741 07:49:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:06.741 07:49:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:06.741 07:49:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.741 07:49:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:06.741 
07:49:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.741 07:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.741 07:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.678 07:49:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:07.678 00:30:07.678 real 0m26.187s 00:30:07.678 user 1m2.506s 00:30:07.678 sys 0m6.538s 00:30:07.678 07:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.678 07:49:11 -- common/autotest_common.sh@10 -- # set +x 00:30:07.678 ************************************ 00:30:07.678 END TEST nvmf_bdevperf 00:30:07.678 ************************************ 00:30:07.678 07:49:11 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:07.678 07:49:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:07.679 07:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.679 07:49:11 -- common/autotest_common.sh@10 -- # set +x 00:30:07.679 ************************************ 00:30:07.679 START TEST nvmf_target_disconnect 00:30:07.679 ************************************ 00:30:07.679 07:49:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:07.679 * Looking for test storage... 
00:30:07.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.679 07:49:11 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.679 07:49:11 -- nvmf/common.sh@7 -- # uname -s 00:30:07.679 07:49:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.679 07:49:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.679 07:49:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.679 07:49:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.679 07:49:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.679 07:49:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.679 07:49:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.679 07:49:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.679 07:49:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.679 07:49:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.679 07:49:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.679 07:49:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.679 07:49:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.679 07:49:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.679 07:49:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.679 07:49:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.679 07:49:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.679 07:49:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.679 07:49:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.679 07:49:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.679 07:49:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.679 07:49:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.679 07:49:11 -- paths/export.sh@5 -- # export PATH 00:30:07.679 07:49:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.679 07:49:11 -- nvmf/common.sh@46 -- # : 0 00:30:07.679 07:49:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:07.679 07:49:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:07.679 07:49:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:07.679 07:49:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.679 07:49:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.679 07:49:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:07.679 07:49:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:07.679 07:49:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:07.679 07:49:11 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:07.679 07:49:11 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:07.679 07:49:11 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:07.679 07:49:11 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:07.679 07:49:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:07.679 07:49:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.679 07:49:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:07.679 07:49:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:07.679 07:49:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:07.679 07:49:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.679 07:49:11 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.679 07:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.679 07:49:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:07.679 07:49:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:07.679 07:49:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:07.679 07:49:11 -- common/autotest_common.sh@10 -- # set +x 00:30:12.954 07:49:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:12.954 07:49:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:12.954 07:49:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:12.954 07:49:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:12.954 07:49:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:12.954 07:49:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:12.954 07:49:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:12.954 07:49:16 -- nvmf/common.sh@294 -- # net_devs=() 00:30:12.954 07:49:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:12.954 07:49:16 -- nvmf/common.sh@295 -- # e810=() 00:30:12.954 07:49:16 -- nvmf/common.sh@295 -- # local -ga e810 00:30:12.954 07:49:16 -- nvmf/common.sh@296 -- # x722=() 00:30:12.954 07:49:16 -- nvmf/common.sh@296 -- # local -ga x722 00:30:12.954 07:49:16 -- nvmf/common.sh@297 -- # mlx=() 00:30:12.954 07:49:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:12.954 07:49:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.954 07:49:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:12.954 07:49:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:12.954 07:49:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:12.955 07:49:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:12.955 07:49:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:12.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:12.955 07:49:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:12.955 07:49:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:12.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:12.955 07:49:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:30:12.955 07:49:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:12.955 07:49:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.955 07:49:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.955 07:49:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:12.955 Found net devices under 0000:af:00.0: cvl_0_0 00:30:12.955 07:49:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.955 07:49:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:12.955 07:49:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.955 07:49:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.955 07:49:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:12.955 Found net devices under 0000:af:00.1: cvl_0_1 00:30:12.955 07:49:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.955 07:49:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:12.955 07:49:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:12.955 07:49:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.955 07:49:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.955 07:49:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.955 07:49:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:12.955 07:49:16 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.955 07:49:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.955 07:49:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:12.955 07:49:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.955 07:49:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.955 07:49:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:12.955 07:49:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:12.955 07:49:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.955 07:49:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.955 07:49:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.955 07:49:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.955 07:49:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:12.955 07:49:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.955 07:49:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.955 07:49:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.955 07:49:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:12.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:30:12.955 00:30:12.955 --- 10.0.0.2 ping statistics --- 00:30:12.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.955 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:30:12.955 07:49:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:30:12.955 00:30:12.955 --- 10.0.0.1 ping statistics --- 00:30:12.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.955 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:12.955 07:49:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.955 07:49:16 -- nvmf/common.sh@410 -- # return 0 00:30:12.955 07:49:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:12.955 07:49:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.955 07:49:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:12.955 07:49:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.955 07:49:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:12.955 07:49:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:12.955 07:49:16 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:12.955 07:49:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:12.955 07:49:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:12.955 07:49:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 ************************************ 00:30:12.955 START TEST nvmf_target_disconnect_tc1 00:30:12.955 ************************************ 00:30:12.955 07:49:16 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:12.955 07:49:16 -- host/target_disconnect.sh@32 -- # set +e 00:30:12.955 07:49:16 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.955 [2024-10-07 07:49:16.730266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.955 
[2024-10-07 07:49:16.730519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.955 [2024-10-07 07:49:16.730531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cbd70 with addr=10.0.0.2, port=4420 00:30:12.955 [2024-10-07 07:49:16.730551] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:12.955 [2024-10-07 07:49:16.730562] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:12.955 [2024-10-07 07:49:16.730569] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:12.955 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:12.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:12.955 Initializing NVMe Controllers 00:30:12.955 07:49:16 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:12.955 07:49:16 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:12.955 07:49:16 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:12.955 07:49:16 -- common/autotest_common.sh@1132 -- # return 0 00:30:12.955 07:49:16 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:12.955 07:49:16 -- host/target_disconnect.sh@41 -- # set -e 00:30:12.955 00:30:12.955 real 0m0.087s 00:30:12.955 user 0m0.034s 00:30:12.955 sys 0m0.052s 00:30:12.955 07:49:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.955 07:49:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 ************************************ 00:30:12.955 END TEST nvmf_target_disconnect_tc1 00:30:12.955 ************************************ 00:30:12.955 07:49:16 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:12.955 07:49:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:12.955 07:49:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:12.955 07:49:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 
************************************ 00:30:12.955 START TEST nvmf_target_disconnect_tc2 00:30:12.955 ************************************ 00:30:12.955 07:49:16 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:12.955 07:49:16 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:12.955 07:49:16 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:12.955 07:49:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:12.955 07:49:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:12.955 07:49:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 07:49:16 -- nvmf/common.sh@469 -- # nvmfpid=106226 00:30:12.955 07:49:16 -- nvmf/common.sh@470 -- # waitforlisten 106226 00:30:12.955 07:49:16 -- common/autotest_common.sh@819 -- # '[' -z 106226 ']' 00:30:12.955 07:49:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.955 07:49:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:12.955 07:49:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.955 07:49:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:12.955 07:49:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.955 07:49:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:12.955 [2024-10-07 07:49:16.819050] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:30:12.955 [2024-10-07 07:49:16.819095] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.955 [2024-10-07 07:49:16.889467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.215 [2024-10-07 07:49:16.966586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:13.215 [2024-10-07 07:49:16.966694] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.215 [2024-10-07 07:49:16.966702] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.215 [2024-10-07 07:49:16.966709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.215 [2024-10-07 07:49:16.966817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:13.215 [2024-10-07 07:49:16.966924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:13.215 [2024-10-07 07:49:16.967029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:13.215 [2024-10-07 07:49:16.967030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:13.784 07:49:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:13.784 07:49:17 -- common/autotest_common.sh@852 -- # return 0 00:30:13.784 07:49:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:13.784 07:49:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x 00:30:13.784 07:49:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.784 07:49:17 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 Malloc0
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 [2024-10-07 07:49:17.705089] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 [2024-10-07 07:49:17.730120] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:13.784 07:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:13.784 07:49:17 -- common/autotest_common.sh@10 -- # set +x
00:30:13.784 07:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:13.784 07:49:17 -- host/target_disconnect.sh@50 -- # reconnectpid=106466
00:30:13.784 07:49:17 -- host/target_disconnect.sh@52 -- # sleep 2
00:30:13.784 07:49:17 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:14.043 EAL: No free 2048 kB hugepages reported on node 1
00:30:15.957 07:49:19 -- host/target_disconnect.sh@53 -- # kill -9 106226
00:30:15.957 07:49:19 -- host/target_disconnect.sh@55 -- # sleep 2
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
[2024-10-07 07:49:19.755277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Write completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 [2024-10-07 07:49:19.755473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.957 Read completed with error (sct=0, sc=8)
00:30:15.957 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 [2024-10-07 07:49:19.755666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Write completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 Read completed with error (sct=0, sc=8)
00:30:15.958 starting I/O failed
00:30:15.958 [2024-10-07 07:49:19.755854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.958 [2024-10-07 07:49:19.756176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.756481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.756518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.756730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.756933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.756965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.757202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.757453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.757485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.757729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.758025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.758057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.758307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.758500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.758532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.758776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.758968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.759001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.759184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.759433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.759464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.759661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.759895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.759926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.760200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.760453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.760470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.760625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.760774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.760806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.761046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.761317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.761350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.761627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.761922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.761953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.762245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.762430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.958 [2024-10-07 07:49:19.762473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.958 qpair failed and we were unable to recover it.
00:30:15.958 [2024-10-07 07:49:19.762738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.762933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.762950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.763102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.763260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.763293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.763564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.763729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.763760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.763988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.764251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.764285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.764610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.764845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.764876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.765046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.765245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.765279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.765451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.765676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.765709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.765881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.766320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.766703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.766905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.767247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.767474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.767491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.767768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.767915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.767932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.768128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.768358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.768375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.768519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.768734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.768750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.768886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.769236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.769627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.769845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.770053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.770260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.770278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.770506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.770646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.770662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.770866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.771008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.771039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.771288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.771508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.771541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.771813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.772126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.772160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.772485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.772701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.772718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.773002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.773177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.773210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.773391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.773623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.773655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.773999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.774318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.959 [2024-10-07 07:49:19.774351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.959 qpair failed and we were unable to recover it.
00:30:15.959 [2024-10-07 07:49:19.774597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.774834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.774866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.959 qpair failed and we were unable to recover it. 00:30:15.959 [2024-10-07 07:49:19.775105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.775330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.775363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.959 qpair failed and we were unable to recover it. 00:30:15.959 [2024-10-07 07:49:19.775598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.775769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.775807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.959 qpair failed and we were unable to recover it. 00:30:15.959 [2024-10-07 07:49:19.776054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.776353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.776386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.959 qpair failed and we were unable to recover it. 
00:30:15.959 [2024-10-07 07:49:19.776645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.776888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.959 [2024-10-07 07:49:19.776919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.959 qpair failed and we were unable to recover it. 00:30:15.959 [2024-10-07 07:49:19.777091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.777367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.777384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.777594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.777813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.777830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.778040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.778255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.778273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.778420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.778655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.778687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.778839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.779089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.779123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.779421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.779703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.779719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.779979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.780316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.780733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.780985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.781301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.781486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.781504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.781702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.781917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.781949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.782121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.782387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.782405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.782536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.782696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.782728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.782952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.783245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.783291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.783509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.783767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.783812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.784040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.784170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.784203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.784370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.784610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.784641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.784813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.785108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.785140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.785402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.785628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.785659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.785956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.786189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.786221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.786518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.786750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.786782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.786962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.787216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.787249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.787548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.787754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.787770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.787888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.788110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.788143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.788382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.788597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.788629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.788761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.789050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.789072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.789296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.789558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.789590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.789898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.790096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.790129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.790390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.790612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.790644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.790934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 00:30:15.960 [2024-10-07 07:49:19.791318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.960 qpair failed and we were unable to recover it. 
00:30:15.960 [2024-10-07 07:49:19.791690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.960 [2024-10-07 07:49:19.791893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.792126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.792353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.792369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.792567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.792873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.792906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.793100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.793396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.793428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.793615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.793864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.793895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.794129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.794348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.794380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.794624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.794798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.794829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.795010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.795262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.795297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.795586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.795797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.795814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.795938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.796289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.796751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.796935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.797124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.797374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.797407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.797596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.797789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.797806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.798017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.798176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.798193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.798324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.798481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.798513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.798687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.798997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.799029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.799334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.799566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.799605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.799832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.800115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.800149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.800318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.800556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.800589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.800821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.801052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.801092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.801342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.801531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.801563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.801883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.802056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.802101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.802330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.802570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.802608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 
00:30:15.961 [2024-10-07 07:49:19.802906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.803344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.803708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.961 [2024-10-07 07:49:19.803942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.961 qpair failed and we were unable to recover it. 00:30:15.961 [2024-10-07 07:49:19.804267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-10-07 07:49:19.804516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-10-07 07:49:19.804549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 
00:30:15.962 [2024-10-07 07:49:19.804804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.805052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.805094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.805391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.805679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.805711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.806005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.806245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.806278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.806600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.806831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.806848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.806992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.807298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.807331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.807511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.807685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.807717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.808008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.808187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.808221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.808396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.808540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.808557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.808851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.809357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.809745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.809950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.810180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.810341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.810373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.811075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.811328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.811347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.811486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.811691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.811708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.811916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.812171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.812188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.812463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.812692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.812708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.812853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.813128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.813146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.813369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.813661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.813696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.814012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.814238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.814271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.814431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.814638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.814670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.814905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.815098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.815133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.815453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.815698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.815715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.815904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.816055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.816099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.816293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.816468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.816500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.816754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.817221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.817640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.817837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.818080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.818305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.818338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.818581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.818800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.818832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.819000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.819242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.819278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-10-07 07:49:19.819537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-10-07 07:49:19.819715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.819753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.820048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.820290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.820321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.820493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.820660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.820693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.820927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.821071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.821104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.821427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.821714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.821746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.821936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.822156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.822189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.822363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.822488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.822505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.822804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.823073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.823106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.823357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.823538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.823555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.824426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.825626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.825661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.825921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.826145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.826180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.826463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.826618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.826651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.826928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.827237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.827270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.827429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.827669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.827701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.827930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.828167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.828201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.828501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.828656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.828688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.828997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.829178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.829211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.829532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.829826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.829858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.830040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.830220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.830253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.830494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.830729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.830745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.830946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.831180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.831213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.831495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.831724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.831755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.832041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.832274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.832292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.832488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.832632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.832648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.832917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.833202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.833236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.833489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.833723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.833754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.833993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.834186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.834220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.834385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.834618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.834651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.834967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.836323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.836355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.836576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.836781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.836799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.837014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.837240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.837258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.837512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.837786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.837804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.963 qpair failed and we were unable to recover it.
00:30:15.963 [2024-10-07 07:49:19.837959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.963 [2024-10-07 07:49:19.838171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.838189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.838337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.838533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.838550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.838702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.838830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.838848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.839051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.839258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.839277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.839511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.839716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.839734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.839931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.840357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.840661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.840875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.842316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.842613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.842633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.842794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.843038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.843100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.843402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.843643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.843661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.843786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.844210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.844609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.844931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.845112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.845849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.845876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.846105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.846362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.846394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.846634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.846810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.846842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.847015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.847275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.847310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.847537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.847702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.847735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.847890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.848332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.848779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.848961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.849105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.849262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.849295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.849438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.849723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.849754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.849914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.850148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.850181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.850355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.850520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.964 [2024-10-07 07:49:19.850551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.964 qpair failed and we were unable to recover it.
00:30:15.964 [2024-10-07 07:49:19.850702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.850809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.850826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-10-07 07:49:19.850961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-10-07 07:49:19.851245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-10-07 07:49:19.851678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.851897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-10-07 07:49:19.852092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.852218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.852235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-10-07 07:49:19.852429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.852556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-10-07 07:49:19.852573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-10-07 07:49:19.852777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.853093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.853128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.853377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.853614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.853647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.853808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.853975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.854007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.854203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.854342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.854382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.854556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.854806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.854838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.855098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.855244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.855261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.855482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.855703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.855737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.855969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.856194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.856229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.856394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.856576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.856608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.856788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.856988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.857020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.857301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.857534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.857565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.857763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.857979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.858010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.858178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.858338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.858369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.858594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.858760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.858791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.858948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.859383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.859763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.859892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.860022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.860231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.860249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.860404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.860572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.860605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.860783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.861077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.861110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.861297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.862419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.862832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.862987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.863163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.863322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.863354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.863589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.863809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.863841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.864031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.864206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.864239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-10-07 07:49:19.864463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.864732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.864764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.864930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.865301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-10-07 07:49:19.865756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-10-07 07:49:19.865950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.866139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.866359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.866392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.866557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.866763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.866795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.866972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.867321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.867756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.867952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.868128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.868421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.868452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.868620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.868772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.868803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.869019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.869324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.869698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.869950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.870122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.870279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.870312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.870544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.870782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.870813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.870974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.871155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.871190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.871430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.871666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.871699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.871938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.872292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.872694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.872887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.873050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.873213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.873245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.873418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.873643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.873676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.873935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.874288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.874662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.874807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.874947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.875379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.875740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.875923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.876077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.876314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.876346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.876581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.876820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.876837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.877038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-10-07 07:49:19.877357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-10-07 07:49:19.877770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-10-07 07:49:19.877923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.878066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.878220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.878236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.878360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.878579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.878611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.878813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.879026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.879090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.879336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.880203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.880231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.880469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.880666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.880683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.882011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.882276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.882316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.882499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.882753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.882785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.883099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.883318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.883350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.883540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.883780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.883812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.884043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.884217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.884250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.884491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.884755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.884787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.885021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.885266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.885299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.885617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.885862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.885879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.886759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.886980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.886999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.887198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.887487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.887520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.887713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.887846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.887879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.888150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.888312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.888344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.888613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.888788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.888805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.888950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.889182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.889216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.889397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.890443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.890476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.890726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.891018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.891051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.891310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.891533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.891565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.891805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.892045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.892087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.892325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.892552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.892583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.892870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.893267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.893647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.893862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.894081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.894228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.894261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.895100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.895340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.895364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.895586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.895789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.895808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-10-07 07:49:19.896036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.896345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.896363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-10-07 07:49:19.896491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-10-07 07:49:19.896697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.896713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.896850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.897326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.897614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.897771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.897873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.898325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.898756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.898928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.899112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.899348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.899382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.899646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.899815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.899847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.900130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.900368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.900400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.900540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.900709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.900742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.900908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.901131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.901164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.901366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.901627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.901659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.901808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.902016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.902048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.902219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.902398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.902429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.902769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.902999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.903032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.903291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.903546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.903577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.903810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.904051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.904102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.904341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.904514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.904546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.904826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.905335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.905639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.905857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.906171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.906416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.906449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.906691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.906908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.906940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.907174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.907487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.907520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.907744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.908006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.908039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.908229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.908486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.908518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.909687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.909994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.910014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.910157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.910354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.910386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.910560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.910832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.910876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.911013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.911157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.911175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 
00:30:15.968 [2024-10-07 07:49:19.911440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.911661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.968 [2024-10-07 07:49:19.911693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.968 qpair failed and we were unable to recover it. 00:30:15.968 [2024-10-07 07:49:19.911961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.912143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.912178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 00:30:15.969 [2024-10-07 07:49:19.912424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.912599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.912630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 00:30:15.969 [2024-10-07 07:49:19.912868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 
00:30:15.969 [2024-10-07 07:49:19.913259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 00:30:15.969 [2024-10-07 07:49:19.913701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.913882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 00:30:15.969 [2024-10-07 07:49:19.914114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.914299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.914331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 00:30:15.969 [2024-10-07 07:49:19.914664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.914808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.969 [2024-10-07 07:49:19.914825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:15.969 qpair failed and we were unable to recover it. 
00:30:15.969 [2024-10-07 07:49:19.915087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.915451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.915748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.915969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.916204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.916403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.916420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.916549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.916705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.916722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.916922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.917400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.917701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.917912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.918050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.918348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.918709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.918851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.918957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.919254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.919632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.919751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.919875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.920049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.969 [2024-10-07 07:49:19.920080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:15.969 qpair failed and we were unable to recover it.
00:30:15.969 [2024-10-07 07:49:19.920272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.920479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.920496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.920698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.920904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.920920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.921056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.921353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.921648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.921794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.922000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.922422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.922784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.922932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.923143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.923337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.923354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.923612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.923712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.923729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.923870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.924249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.924664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.924870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.925068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.925375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.925635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.925795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.925991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.926135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.926152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.926372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.926567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.926600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.926781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.927224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.927713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.927926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.928106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.928519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.928875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.928977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.929102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.929229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.929244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.929453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.929662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.929692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.929948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.930072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.930110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.930314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.930524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.930539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.250 qpair failed and we were unable to recover it.
00:30:16.250 [2024-10-07 07:49:19.930731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.250 [2024-10-07 07:49:19.930924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.930939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.931174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.931315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.931330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.931553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.931761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.931777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.931987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.932370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.932675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.932838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.933069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.933310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.933342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.933593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.933767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.933798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.934047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.934210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.934247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.934495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.934777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.934791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.934995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.935146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.935162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.935318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.935538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.935568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.935839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.936037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.936105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.936338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.936560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.936591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.936836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.937234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.937675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.937883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.938154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.938445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.938476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.938641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.938847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.938877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.939070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.939340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.939371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.939638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.939901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.939931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.940168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.940300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.940330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.940557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.940724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.940755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.940978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.941273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.941306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.941632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.941799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.941814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.941968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.942327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.942751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.942967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.943116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.943320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.251 [2024-10-07 07:49:19.943335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:16.251 qpair failed and we were unable to recover it.
00:30:16.251 [2024-10-07 07:49:19.943534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.251 [2024-10-07 07:49:19.943666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.943681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.943818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.944345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.944667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.944894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.945108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.945368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.945399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.945626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.945915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.945930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.946090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.946290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.946321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.946499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.946656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.946686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.946928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.947380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.947759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.947997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.948240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.948408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.948439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.948617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.948783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.948814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.948969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.949191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.949207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.949468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.949632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.949661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.949893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.950090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.950123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.950298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.950559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.950591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.950848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.951235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.951699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.951887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.952046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.952288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.952326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.952555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.952792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.952823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.952996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.953141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.953174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.953442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.953608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.953639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.953859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.954368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.954735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.954882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.955033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.955227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.955242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 
00:30:16.252 [2024-10-07 07:49:19.955448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.955613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.955643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.252 qpair failed and we were unable to recover it. 00:30:16.252 [2024-10-07 07:49:19.955866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.252 [2024-10-07 07:49:19.956036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.956075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.956226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.956378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.956408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.956663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.956886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.956901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.957041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.957289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.957321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.957642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.957822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.957852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.958023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.958219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.958251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.958421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.958655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.958685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.958861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.958990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.959004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.959218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.959368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.959398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.959582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.959701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.959731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.959954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.960234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.960526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.960861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.960990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.961005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.961159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.961311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.961342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.961519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.961785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.961817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.961990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.962175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.962207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.962453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.962670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.962701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.962876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.963031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.963046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.963247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.963450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.963489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.963815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.963983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.964013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.964333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.964510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.964541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.964723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.964960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.964991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.965228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.965483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.965515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.965685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.965860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.965891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.966064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.966214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.966229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.966429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.966702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.966733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 
00:30:16.253 [2024-10-07 07:49:19.966967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.967138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.967155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.967377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.967566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.967581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.253 [2024-10-07 07:49:19.967748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.968044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.253 [2024-10-07 07:49:19.968082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.253 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.968264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.968427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.968458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.254 [2024-10-07 07:49:19.968610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.968822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.968837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.969096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.969353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.969384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.969617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.969908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.969923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.970063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.970255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.970270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.254 [2024-10-07 07:49:19.970418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.970676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.970691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.970908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.971095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.971111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.971369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.971575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.971605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-10-07 07:49:19.971831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.972019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-10-07 07:49:19.972050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-10-07 07:49:20.007211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.007424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.007439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.007578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.007715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.007730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.007871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.008149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-10-07 07:49:20.008424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.008742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.008945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.009175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.009507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-10-07 07:49:20.009823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.009973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.010084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.010339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.010631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.010789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-10-07 07:49:20.010957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.011388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.011703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.011859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.011996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-10-07 07:49:20.012347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.012764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.012988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.013202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.013405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.013420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-10-07 07:49:20.013563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.013850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-10-07 07:49:20.013893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.014175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.014309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.014325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.014469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.014617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.014633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.014858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.014997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.015013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.015170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.015358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.015373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.015520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.015736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.015752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.015893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.016204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.016505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.016841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.016997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.017202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.017382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.017397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.017606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.017738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.017753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.017924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.018326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.018630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.018780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.018901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.019253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.019666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.019803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.019936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.020232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.020496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.020722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.020956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.021309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.021650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.021868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.022018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.022354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.022668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.022808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.023005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-10-07 07:49:20.023298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-10-07 07:49:20.023771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-10-07 07:49:20.023980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.024131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.024398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.024680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.024837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 
00:30:16.259 [2024-10-07 07:49:20.024962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.025169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.025186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.025393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.025588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.025603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.025849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.026152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 
00:30:16.259 [2024-10-07 07:49:20.026542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.026820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.026995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.027211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.027416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.027431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-10-07 07:49:20.027560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.027795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-10-07 07:49:20.027811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-10-07 07:49:20.058468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.058705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.058720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.058920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.059300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.059676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.059820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-10-07 07:49:20.059992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.060391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.060692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.060909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.061168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-10-07 07:49:20.061444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.061747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.061965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.062188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.062330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.062348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.062491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.062680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.062695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-10-07 07:49:20.062887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.063234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.063736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.063960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.064110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-10-07 07:49:20.064458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.064833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.064986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-10-07 07:49:20.065151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.065338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-10-07 07:49:20.065354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.065458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.065594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.065609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.065806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.066230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.066584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.066734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.066979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.067239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.067628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.067827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.067972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.068354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.068652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.068805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.068906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.069320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.069688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.069857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.070070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.070442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.070789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.070946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.071146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.071441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.071713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.071866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.072042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.072257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.072273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.072491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.072647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.072662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.072877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.073374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.073780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.073930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.074083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.074299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.074314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-10-07 07:49:20.074592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.074792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-10-07 07:49:20.074807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-10-07 07:49:20.074930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.075308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.075601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.075750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-10-07 07:49:20.075898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.076251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.076541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.076696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.076901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-10-07 07:49:20.077262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.077555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.077858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.077986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.078341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-10-07 07:49:20.078619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.078761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.078897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.079189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-10-07 07:49:20.079531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-10-07 07:49:20.079674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-10-07 07:49:20.109637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.109895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.109911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.110108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.110453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.110726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.110879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-10-07 07:49:20.111015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.111363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.111692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.111852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.111990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-10-07 07:49:20.112279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.112576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.112787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.112915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.113309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-10-07 07:49:20.113659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.113821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.114033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.114330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.114670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.114826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-10-07 07:49:20.114971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.115104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.115121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.115312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.115549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-10-07 07:49:20.115565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-10-07 07:49:20.115829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.115973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.115989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.116119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.116343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.116358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.116498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.116686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.116701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.116836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.117234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.117519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.117691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.117894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.118238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.118485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.118761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.118897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.119165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.119378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.119393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.119530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.119661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.119676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.119841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.119998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.120014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.120161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.120348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.120364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.120601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.120727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.120743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.120899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.121249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.121581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.121818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.122021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.122221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.122236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.122462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.122589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.122605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.122795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.122998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.123013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.123150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.123301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.123317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.123476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.123684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.123700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.123837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.124189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-10-07 07:49:20.124501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.124786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-10-07 07:49:20.124936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.125072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-10-07 07:49:20.125089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.125345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.125475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.125491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.125615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.125856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.125871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.125989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.126393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.126776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.126944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.127092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.127473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.127805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.127972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.128118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.128314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.128329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.128460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.128593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.128608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.128801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.129225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.129499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.129814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.129961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.130186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.130465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.130773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.130874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.131110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.131411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.131747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.131994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.132118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.132458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.132785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.132991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.133252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.133409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.133424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.133612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.133744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.133759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.133882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.134216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-10-07 07:49:20.134509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.134724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-10-07 07:49:20.134926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-10-07 07:49:20.135052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.135073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.135266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.135400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.135415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.135620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.135813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.135829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.136029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.136324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.136655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.136799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.136948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.137300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.137680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.137830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.137941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.138214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.138617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.138765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.138972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.139340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.139689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.139913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.140097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.140445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.140739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.140943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.141095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.141406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.141707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.141847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.141988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.142475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.142818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.142969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.143162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.143416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.143432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.143631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.143825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.143840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.143983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-10-07 07:49:20.144321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.144606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.144757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-10-07 07:49:20.144885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.145079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-10-07 07:49:20.145095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.145226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.145474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.145826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.145971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.146165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.146425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.146766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.146983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.147109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.147443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.147737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.147888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.148097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.148368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.148650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.148787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.148903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.149256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.149586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.149784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.150041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.150312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.150790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.150922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.151181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.151362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.151377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.151522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.151654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.151669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.151885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.152163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.152494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.152767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.152983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.153134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-10-07 07:49:20.153449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.153793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.153942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.154119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.154319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.154334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-10-07 07:49:20.154466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.154657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-10-07 07:49:20.154673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.272 [2024-10-07 07:49:20.154824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.154964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.154978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-10-07 07:49:20.155116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-10-07 07:49:20.155380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-10-07 07:49:20.155700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-10-07 07:49:20.155828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 
00:30:16.272 [2024-10-07 07:49:20.155943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.156208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.156460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.156760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.156886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.157065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.157336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.157675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.157796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.157923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.158181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.158451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.158709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.158832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.158940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.159285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.159542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.159867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.159994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.160196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.160434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.160735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.160923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.161106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.161420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.161747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.161852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.161991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.162109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.272 [2024-10-07 07:49:20.162122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.272 qpair failed and we were unable to recover it.
00:30:16.272 [2024-10-07 07:49:20.162343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.162588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.162847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.162985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.163108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.163340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.163597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.163800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.163942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.164055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.164310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.164557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.164866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.164999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.165009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.165263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.165374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.165384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.165566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.165672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.165682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.165817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.166143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.166457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.166782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.166919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.167045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.167446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.167686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.167874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.167996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.168254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.168501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.168749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.168900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.169020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.169274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.169580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.169836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.169961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.170087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.170197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-10-07 07:49:20.170207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
00:30:16.273 [2024-10-07 07:49:20.170342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.170455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.170465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.170580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.170718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.170729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.170895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.171207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.171454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.171708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.171900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.172163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.172466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.172794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.172929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.173047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.173311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.173571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.173712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.173825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.174143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.174395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.174779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.174975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.175224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.175410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.175420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.175536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.175767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.175777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.175904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.176223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.176517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.274 [2024-10-07 07:49:20.176638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.274 qpair failed and we were unable to recover it.
00:30:16.274 [2024-10-07 07:49:20.176754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.176942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.176953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.177126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.177442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.177769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.177888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 
00:30:16.274 [2024-10-07 07:49:20.178092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.178465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.178715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.178972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 00:30:16.274 [2024-10-07 07:49:20.179176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.179291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.274 [2024-10-07 07:49:20.179301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.274 qpair failed and we were unable to recover it. 
00:30:16.274 [2024-10-07 07:49:20.179471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.179607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.179617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.179738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.179849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.179860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.179983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.180291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.180556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.180767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.180922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.181200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.181537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.181785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.181980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.182145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.182410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.182757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.182946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.183130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.183477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.183798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.183997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.184108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.184439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.184785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.184920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.185124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.185412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.185795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.185959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.186090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.186334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.186714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.186839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.186984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.187381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.187687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.187825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.188019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.188134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.188145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-10-07 07:49:20.188262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.188382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.275 [2024-10-07 07:49:20.188393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-10-07 07:49:20.188520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.188632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.188642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.188855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.188984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.188995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.189123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.189460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.189791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.189943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.190134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.190385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.190701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.190823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.190938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.191171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.191511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.191709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.191888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.192134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.192584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.192836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.192967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.193091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.193352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.193606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.193855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.193983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.194107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.194492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.194778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.194915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.195035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.195281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.195545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.195749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.195946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-10-07 07:49:20.196144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.196260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.276 [2024-10-07 07:49:20.196271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-10-07 07:49:20.196395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.276 [2024-10-07 07:49:20.196522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.276 [2024-10-07 07:49:20.196532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.276 qpair failed and we were unable to recover it.
[... the same four-line pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7fbfb0000b90 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 07:49:20.196685 through 07:49:20.222990 ...]
00:30:16.549 [2024-10-07 07:49:20.223117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.223343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.223614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.223735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.223871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.224142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.224388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.224715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.224843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.224956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.225231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.225506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.225770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.225891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.226011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.226260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.226575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.226726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.226870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.227149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.227486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.227710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.227900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.228080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.228343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.228566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.228827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.228974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.229108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.229363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 
00:30:16.549 [2024-10-07 07:49:20.229620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.549 [2024-10-07 07:49:20.229741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.549 qpair failed and we were unable to recover it. 00:30:16.549 [2024-10-07 07:49:20.229854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.229973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.229984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.230232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.230557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.230822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.230958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.231172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.231409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.231733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.231922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.232034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.232381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.232691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.232827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.232945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.233197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.233443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.233705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.233834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.233945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.234306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.234548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.234795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.234929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.235063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.235393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.235663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.235804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.235911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.236159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 
00:30:16.550 [2024-10-07 07:49:20.236475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.236785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.236990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.237000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.237135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.237250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.550 [2024-10-07 07:49:20.237260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.550 qpair failed and we were unable to recover it. 00:30:16.550 [2024-10-07 07:49:20.237399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.237512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.237522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 
00:30:16.551 [2024-10-07 07:49:20.237635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.237846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.237855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.237973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.238302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.238690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.238897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 
00:30:16.551 [2024-10-07 07:49:20.239080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.239403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.239663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.239864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 00:30:16.551 [2024-10-07 07:49:20.239983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.240101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.551 [2024-10-07 07:49:20.240111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.551 qpair failed and we were unable to recover it. 
00:30:16.553 [2024-10-07 07:49:20.269843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.269968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.269980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.553 [2024-10-07 07:49:20.270250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.270344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.270354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.553 [2024-10-07 07:49:20.270602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.270853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.270863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.553 [2024-10-07 07:49:20.271062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.271252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.271263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 
00:30:16.553 [2024-10-07 07:49:20.271529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.271783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.271793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.553 [2024-10-07 07:49:20.272000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.272129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.272139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.553 [2024-10-07 07:49:20.272323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.272570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.553 [2024-10-07 07:49:20.272580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.553 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.272764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.272894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.272905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.273116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.273407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.273660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.273789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.273972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.274157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.274168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.274347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.274544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.274554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.274820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.275212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.275619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.275778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.275902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.276213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.276560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.276765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.276947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.277284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.277614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.277802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.278024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.278346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.278727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.278932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.279045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.279318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.279624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.279893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.280164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.280436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.280447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.280596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.280719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.280729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.280940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.281332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.281588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.281787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.281981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.282247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.282581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.282718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 
00:30:16.554 [2024-10-07 07:49:20.282899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.283093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.283104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.283305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.283552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.283562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.554 qpair failed and we were unable to recover it. 00:30:16.554 [2024-10-07 07:49:20.283744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.554 [2024-10-07 07:49:20.283921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.283931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.284179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.284361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.284371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 
00:30:16.555 [2024-10-07 07:49:20.284601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.284846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.284856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.285066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.285401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.285758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.285898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 
00:30:16.555 [2024-10-07 07:49:20.286081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.286209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.286219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.286344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.286622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.286632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.286817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.287216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 
00:30:16.555 [2024-10-07 07:49:20.287613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.287749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.287862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.288261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.288638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.288778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 
00:30:16.555 [2024-10-07 07:49:20.289031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.289412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.289749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.289883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 00:30:16.555 [2024-10-07 07:49:20.290067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.290260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.555 [2024-10-07 07:49:20.290270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.555 qpair failed and we were unable to recover it. 
00:30:16.555 [2024-10-07 07:49:20.290404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.290650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.290659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.290855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.291271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.291706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.291984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.292168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.292456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.292487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.292631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.292881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.292911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.293230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.293543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.293574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.293800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.294019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.294049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.294190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.294476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.555 [2024-10-07 07:49:20.294506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.555 qpair failed and we were unable to recover it.
00:30:16.555 [2024-10-07 07:49:20.294756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.294987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.295017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.295215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.295522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.295553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.295723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.296216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.296550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.296763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.297015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.297362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.297759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.297983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.298107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.298277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.298308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.298532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.298698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.298729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.298967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.299198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.299230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.299402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.299691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.299721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.300008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.300252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.300284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.300554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.300695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.300705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.300904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.301178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.301210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.301470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.301706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.301737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.301962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.302291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.302708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.302942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.303183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.303352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.303383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.303702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.303962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.303993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.304286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.304455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.304486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.304656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.304966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.304996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.305176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.305398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.305434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.305676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.305872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.305882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.306075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.306285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.306316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.556 [2024-10-07 07:49:20.306557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.306852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.556 [2024-10-07 07:49:20.306883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.556 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.307120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.307305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.307354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.307530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.307688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.307719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.307892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.308135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.308167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.308390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.308606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.308637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.308862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.309133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.309165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.309337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.309572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.309602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.309825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.310005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.310036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.310377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.310602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.310633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.310924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.311267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.311598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.311854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.312046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.312186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.312197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.312431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.312661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.312692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.312943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.313256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.313558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.313788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.313957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.314124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.314155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.314407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.314641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.314672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.314960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.315125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.315157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.315453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.315702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.315733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.315968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.316122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.316170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.316425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.316650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.316661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.316850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.317232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.557 [2024-10-07 07:49:20.317727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.557 [2024-10-07 07:49:20.317979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.557 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.318128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.318394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.318404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.318599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.318830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.318860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.319107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.319362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.319392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.319693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.319984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.320014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.320270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.320586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.320617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.320844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.321079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.321111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.321405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.321595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.321626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.321913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.322268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.322607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.322861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.323055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.323297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.323328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.323575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.323743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.323774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.324021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.324264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.324296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.324560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.324740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.324771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.325070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.325313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.325345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.325533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.325765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.325794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.326022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.326265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.326297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.326604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.326941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.326972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.327262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.327572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.327602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.327847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.328056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.328069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.328314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.328453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.328483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.328778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.329247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.329680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.329871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.330080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.330274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.330284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.330409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.330604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.330614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.330817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.331045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.558 [2024-10-07 07:49:20.331087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.558 qpair failed and we were unable to recover it.
00:30:16.558 [2024-10-07 07:49:20.331273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.558 [2024-10-07 07:49:20.331558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.331588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.331751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.331959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.331989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.332332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.332569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.332599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.332774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.332995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.333026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.333326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.333574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.333605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.333878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.334104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.334125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.334258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.334524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.334533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.334843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.335177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.335209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.335401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.335713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.335744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.336034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.336272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.336304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.336476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.336777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.336803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.336998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.337173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.337205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.337429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.337599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.337630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.337962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.338116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.338127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.338319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.338590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.338620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.338822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.338994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.339024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.339248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.339455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.339485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.339651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.339837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.339873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.340031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.340431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.340690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.340808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.340918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.341324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.341581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.341703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 
00:30:16.559 [2024-10-07 07:49:20.341881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.342075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.559 [2024-10-07 07:49:20.342086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.559 qpair failed and we were unable to recover it. 00:30:16.559 [2024-10-07 07:49:20.342233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.342433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.342443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.342641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.342756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.342766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.343066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.343297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.343334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.343516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.343698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.343729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.344020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.344325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.344357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.344582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.344808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.344840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.345053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.345312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.345344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.345579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.345802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.345832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.346148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.346366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.346397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.346541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.346759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.346790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.347030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.347205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.347215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.347359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.347616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.347646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.347900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.348086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.348123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.348298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.348538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.348567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.348758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.348973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.349003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.349236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.349410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.349440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.349617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.349833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.349863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.350030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.350354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.350386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.350627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.350864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.350894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.351180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.351448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.351458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.351639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.351846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.351855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.352066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.352281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.352312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.352548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.352827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.352839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.353027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.353285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.353673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.353880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 00:30:16.560 [2024-10-07 07:49:20.354069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.354290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.354321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.560 qpair failed and we were unable to recover it. 
00:30:16.560 [2024-10-07 07:49:20.354497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.354808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.560 [2024-10-07 07:49:20.354839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.561 qpair failed and we were unable to recover it. 00:30:16.561 [2024-10-07 07:49:20.355087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.355216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.355226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.561 qpair failed and we were unable to recover it. 00:30:16.561 [2024-10-07 07:49:20.355369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.355501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.355511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.561 qpair failed and we were unable to recover it. 00:30:16.561 [2024-10-07 07:49:20.355788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.355974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.561 [2024-10-07 07:49:20.356004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.561 qpair failed and we were unable to recover it. 
00:30:16.561 [2024-10-07 07:49:20.356175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.356351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.356361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.356635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.356838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.356868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.357045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.357290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.357322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.357629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.357775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.357785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.357974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.358150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.358161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.358374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.358599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.358629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.358802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.359023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.359053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.359385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.359709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.359739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.359979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.360202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.360235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.360503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.360680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.360690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.360935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.361122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.361133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.361355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.361569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.361600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.361785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.362100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.362132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.362476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.362734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.362765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.363066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.363321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.363351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.363533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.363768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.363799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.364140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.364366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.364397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.364654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.364829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.364860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.365102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.365339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.365369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.365539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.365717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.365747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.366072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.366298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.366328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.366621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.366785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.366815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.367084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.367299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.367310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.561 [2024-10-07 07:49:20.367579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.367757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.561 [2024-10-07 07:49:20.367767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.561 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.367985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.368423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.368795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.368980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.369110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.369301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.369311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.369585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.369784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.369794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.369983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.370170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.370181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.370306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.370429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.370463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.370722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.371264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.371605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.371717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.371919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.372103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.372134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.372385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.372555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.372585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.372763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.373214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.373659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.373924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.374085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.374332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.374342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.374565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.374765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.374795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.374972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.375348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.375728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.375982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.376139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.376397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.376427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.376653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.376859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.376890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.377039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.377474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.377722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.377914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.378036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.378180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.378191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.378378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.378504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.378524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.562 qpair failed and we were unable to recover it.
00:30:16.562 [2024-10-07 07:49:20.378659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.562 [2024-10-07 07:49:20.378768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.378778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.379000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.379279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.379618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.379825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.380013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.380441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.380778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.380993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.381202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.381454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.381708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.381893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.382091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.382282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.382312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.382471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.382645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.382675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.382901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.383152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.383184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.383405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.383590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.383620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.383931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.384123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.384134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.384281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.384541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.384572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.384758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.385256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.385693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.385958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.386172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.386420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.386430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.386558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.386829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.386860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.387162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.387335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.387365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.387604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.387795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.387805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.387998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.388149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.388159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.388377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.388631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.388661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.563 qpair failed and we were unable to recover it.
00:30:16.563 [2024-10-07 07:49:20.388949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.563 [2024-10-07 07:49:20.389168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.389200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.389393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.389557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.389588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.389769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.389935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.389966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.390258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.390476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.390506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.390844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.391010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.391040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.391292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.391487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.391518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.391741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.391978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.392009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.392209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.392361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.392371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.392510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.392768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.392778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.393044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.393383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.393703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.564 [2024-10-07 07:49:20.393986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.564 qpair failed and we were unable to recover it.
00:30:16.564 [2024-10-07 07:49:20.394306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.394559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.394590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.394787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.395018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.395048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.395284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.395601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.395632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.395875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 
00:30:16.564 [2024-10-07 07:49:20.396434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.396832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.396981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.397183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.397467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.397498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.397754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 
00:30:16.564 [2024-10-07 07:49:20.398214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.398629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.398825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.399000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.399169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.399202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.399460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.399746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.399788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 
00:30:16.564 [2024-10-07 07:49:20.399984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.400184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.400194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.400319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.400522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.564 [2024-10-07 07:49:20.400532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.564 qpair failed and we were unable to recover it. 00:30:16.564 [2024-10-07 07:49:20.400674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.400850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.400880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.401117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.401371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.401402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.401570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.401861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.401891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.402067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.402357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.402387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.402562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.402867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.402898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.403153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.403410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.403442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.403770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.404227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.404464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.404618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.404813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.405007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.405038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.405315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.405563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.405600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.405912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.406125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.406135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.406318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.406519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.406549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.406789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.407023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.407053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.407377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.407613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.407643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.407884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.408121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.408153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.408440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.408629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.408655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.408792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.409231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.409656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.409871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.410073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.410276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.410289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.410494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.410696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.410726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.410920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.411424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.411759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.411979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.412213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.412434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.412465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 
00:30:16.565 [2024-10-07 07:49:20.412691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.412866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.412896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.413193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.413381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.413411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.413587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.413752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.565 [2024-10-07 07:49:20.413782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.565 qpair failed and we were unable to recover it. 00:30:16.565 [2024-10-07 07:49:20.414012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.414323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.414355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 
00:30:16.566 [2024-10-07 07:49:20.414543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.414771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.414808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.415054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.415323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.415354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.415669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.415905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.415936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.416155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.416459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.416490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 
00:30:16.566 [2024-10-07 07:49:20.416782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.416947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.416957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.417100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.417516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.417820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.417955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 
00:30:16.566 [2024-10-07 07:49:20.418212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.418340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.418350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.418478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.418677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.418687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.418815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.419012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.419024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 00:30:16.566 [2024-10-07 07:49:20.419270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.419467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.566 [2024-10-07 07:49:20.419477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.566 qpair failed and we were unable to recover it. 
00:30:16.566 [2024-10-07 07:49:20.419610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.419830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.419861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.420106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.420372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.420759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.420956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.421158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.421264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.421274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.421484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.421650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.421682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.421869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.422135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.422167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.422391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.422614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.422644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.422805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.422974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.423004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.423132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.423320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.423330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.423542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.423819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.423850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.424081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.424340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.424372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.424660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.424878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.424907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.425131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.425425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.425455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.425693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.425867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.425898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.426100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.426331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.566 [2024-10-07 07:49:20.426363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.566 qpair failed and we were unable to recover it.
00:30:16.566 [2024-10-07 07:49:20.426585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.426870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.426901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.427118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.427445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.427846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.427953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.428227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.428411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.428420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.428614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.428812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.428842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.429083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.429319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.429349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.429522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.429766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.429797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.429926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.430103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.430135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.430444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.430642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.430652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.430842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.431247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.431654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.431855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.432022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.432386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.432761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.432944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.433148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.433446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.433785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.433991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.434183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.434320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.434346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.434584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.434888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.434919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.435170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.435297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.435306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.435505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.435805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.435836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.436012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.436335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.436346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.436590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.436861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.436871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.437072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.437225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.437256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.437580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.437742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.437772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.437996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.438164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.438196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.567 [2024-10-07 07:49:20.438570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.438792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.567 [2024-10-07 07:49:20.438822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.567 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.439087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.439317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.439327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.439520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.439639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.439649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.439845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.440051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.440091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.440318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.440617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.440647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.440843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.441079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.441112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.441374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.441598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.441630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.441802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.441988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.442017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.442272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.442558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.442588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.442816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.442991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.443020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.443275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.443459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.443489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.443657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.443898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.443929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.444084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.444271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.444281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.444398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.444616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.444626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.444847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.444984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.445015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.445265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.445516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.445547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.445736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.445962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.445993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.446217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.446441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.446471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.446765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.446997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.447027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.447262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.447500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.447510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.447690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.447867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.447877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.448006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.448351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.448683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.448951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.449230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.449389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.449399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.449625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.449845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.449876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.450171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.450395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.450405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.568 qpair failed and we were unable to recover it.
00:30:16.568 [2024-10-07 07:49:20.450621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.450808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.568 [2024-10-07 07:49:20.450837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.451098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.451346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.451376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.451598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.451767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.451797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.452085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.452373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.452403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.452589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.452775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.452807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.453043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.453382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.453392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.453572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.453778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.453808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.454030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.454274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.454306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.454576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.454858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.454867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.455015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.455348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.455669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.455802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.456070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.456215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.456225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.456473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.456774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.456805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.457041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.457284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.457316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.457479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.457711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.457742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.458030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.458217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.569 [2024-10-07 07:49:20.458249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:16.569 qpair failed and we were unable to recover it.
00:30:16.569 [2024-10-07 07:49:20.458406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.458530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.458540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.458672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.458892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.458921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.459150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.459378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.459408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.459589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.459881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.459911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 
00:30:16.569 [2024-10-07 07:49:20.460141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.460341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.460371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.460617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.460846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.460876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.461173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.461369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.461379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.461570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.461745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.461755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 
00:30:16.569 [2024-10-07 07:49:20.461881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.462331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.462787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-10-07 07:49:20.462996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-10-07 07:49:20.463240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.463411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.463422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.463616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.463800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.463830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.464084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.464295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.464325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.464502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.464742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.464773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.465068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.465377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.465408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.465643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.465879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.465909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.466141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.466381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.466412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.466581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.466867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.466897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.467081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.467305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.467336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.467586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.467825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.467856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.468101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.468250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.468280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.468538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.468773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.468803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.468965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.469129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.469168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.469354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.469545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.469574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.469814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.470032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.470069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.470318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.470559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.470569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.470834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.471200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.471535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.471854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.471978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.472233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.472352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.472363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.472576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.472804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.472835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.473072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.473360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.473594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.473799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-10-07 07:49:20.473987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.474124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.474135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.474311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.474498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.474527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.474739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.474998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.475029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-10-07 07:49:20.475283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.475573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-10-07 07:49:20.475604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-10-07 07:49:20.475771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.476371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.476804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.476933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.477064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-10-07 07:49:20.477417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.477707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.477913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.478042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.478179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.478216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.478387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.478628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.478659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-10-07 07:49:20.478881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.479256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.479728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.479919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.480176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.480375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.480387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-10-07 07:49:20.480635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.480747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.480757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.480960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.481135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.481145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.481322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.481523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.481554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.481795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.481978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.482009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-10-07 07:49:20.482329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.482567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.482598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.482748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.482973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.483005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.483235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.483505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.483536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-10-07 07:49:20.483784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.484083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-10-07 07:49:20.484114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
[00:30:16.571–00:30:16.846, 2024-10-07 07:49:20.484426 through 07:49:20.518199: the same error cycle repeats — posix.c:1032:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."]
00:30:16.846 [2024-10-07 07:49:20.518392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.518606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.518617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.518808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.518943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.518953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.519075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.519325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 
00:30:16.846 [2024-10-07 07:49:20.519731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.519931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.520039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.520421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.520681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.520811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 
00:30:16.846 [2024-10-07 07:49:20.521062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.521252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.521262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.521482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.521756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.521767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.522016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.522223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.522234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 00:30:16.846 [2024-10-07 07:49:20.522370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.522557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.846 [2024-10-07 07:49:20.522568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.846 qpair failed and we were unable to recover it. 
00:30:16.846 [2024-10-07 07:49:20.522777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.522910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.522921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.523053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.523398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.523748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.523939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.524133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.524472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.524856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.524965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.525108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.525440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.525731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.525930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.526040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.526353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.526797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.526986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.527166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.527347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.527357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.527470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.527622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.527632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.527809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.528198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.528740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.528942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.529133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.529329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.529339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.529541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.529666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.529676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.529811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.530243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.530580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.530783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.530979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.531298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.531717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.531860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.532103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.532282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.532292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.532485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.532660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.532670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 
00:30:16.847 [2024-10-07 07:49:20.532816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.533032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.847 [2024-10-07 07:49:20.533042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.847 qpair failed and we were unable to recover it. 00:30:16.847 [2024-10-07 07:49:20.533263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.533525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.533535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.533664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.533913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.533923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.534025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.534148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.534159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.534413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.534598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.534609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.534802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.535172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.535472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.535772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.535913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.536100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.536392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.536720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.536848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.536969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.537300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.537606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.537732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.537913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.538281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.538687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.538902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.539052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.539406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.539758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.539975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.540189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.540555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.540874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.540992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.541139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.541422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.541437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.541578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.541857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.541872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.542070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.542221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.542237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.542555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.542757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.542772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 
00:30:16.848 [2024-10-07 07:49:20.542962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.543169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.543186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.543313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.543515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.848 [2024-10-07 07:49:20.543531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.848 qpair failed and we were unable to recover it. 00:30:16.848 [2024-10-07 07:49:20.543685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.543892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.543907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.544133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.544335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.544350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.544568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.544800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.544815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.544906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.545273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.545597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.545815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.545958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.546400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.546744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.546900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.547113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.547256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.547272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.547530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.547718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.547733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.547992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.548478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.548763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.548894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.549052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.549259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.549274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.549496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.549698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.549713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.549941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.550082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.550098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.550363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.550516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.550531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.550789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.550999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.551014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.551252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.551422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.551442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.551701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.551888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.551904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.552107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.552316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.552332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.552463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.552670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.552685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.552885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.553292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.849 [2024-10-07 07:49:20.553715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.553877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 
00:30:16.849 [2024-10-07 07:49:20.554069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.554263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.849 [2024-10-07 07:49:20.554279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.849 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.554478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.554666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.554682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.554879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.555169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.555585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.555801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.556001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.556145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.556161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.556448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.556588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.556604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.556746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.556997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.557013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.557164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.557297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.557312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.557527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.557763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.557778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.557913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.558377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.558715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.558952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.559091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.559285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.559300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.559508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.559631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.559647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.559903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.560268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.560545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.560680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.560815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.561223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.561497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.561709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.561901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.562198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.562606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.562772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 
00:30:16.850 [2024-10-07 07:49:20.562903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.563364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.850 qpair failed and we were unable to recover it. 00:30:16.850 [2024-10-07 07:49:20.563810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.850 [2024-10-07 07:49:20.563906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.564167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.564569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.564828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.564973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.565160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.565453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.565779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.565983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.566269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.566421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.566436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.566722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.566957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.566973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.567103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.567296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.567311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.567500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.567716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.567732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.567988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.568412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.568776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.568933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.569068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.569290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.569307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.569498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.569638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.569654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.569854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.570211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.570713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.570862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.570995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.571196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.571212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.571498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.571716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.571731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.571929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.572317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.572593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.572805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.572995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.573197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.573213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.573414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.573534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.573550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 
00:30:16.851 [2024-10-07 07:49:20.573828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.574229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.574574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.574818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.851 qpair failed and we were unable to recover it. 00:30:16.851 [2024-10-07 07:49:20.574957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.851 [2024-10-07 07:49:20.575074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.575090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.575303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.575452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.575468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.575609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.575780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.575795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.576002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.576277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.576730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.576954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.577145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.577267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.577283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.577477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.577604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.577619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.577812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.578085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.578104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.578323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.578582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.578598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.578809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.579263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.579664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.579874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.580016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.580226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.580241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.580464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.580659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.580675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.580799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.581205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.581612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.581775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.581865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.582366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.582715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.582930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.583073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.583425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.583817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.583962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.584114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.584370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.584386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.584618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.584740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.584755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.585010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.585366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 00:30:16.852 [2024-10-07 07:49:20.585775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.585995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.852 qpair failed and we were unable to recover it. 
00:30:16.852 [2024-10-07 07:49:20.586150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.852 [2024-10-07 07:49:20.586350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.586365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.586469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.586667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.586683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.586841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.586963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.586978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.587174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.587368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.587383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.587532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.587675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.587690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.587905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.588323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.588757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.588915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.589055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.589517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.589811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.589997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.590099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.590436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.590723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.590930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.591134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.591337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.591353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.591475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.591686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.591701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.591875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.592229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.592709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.592877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.593077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.593301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.593316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.593471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.593676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.593691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.593818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.594286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.594717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.594934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.595138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.595331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.595347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.595556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.595707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.595722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.595931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.596283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 
00:30:16.853 [2024-10-07 07:49:20.596620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.596824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.853 qpair failed and we were unable to recover it. 00:30:16.853 [2024-10-07 07:49:20.597087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.853 [2024-10-07 07:49:20.597287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.597302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.597462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.597582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.597598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.597725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.597873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.597889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.598016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.598451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.598805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.598963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.599051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.599204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.599219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.599500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.599704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.599720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.599864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.600159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.600462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.600679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.600891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.601246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.601487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.601824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.601972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.602112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.602254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.602269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.602475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.602669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.602684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.602877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.603261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.603663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.603803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.604069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.604378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.604815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.604953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 
00:30:16.854 [2024-10-07 07:49:20.605150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.605405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.605421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.605569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.605773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.605788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.854 [2024-10-07 07:49:20.605996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.606192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.854 [2024-10-07 07:49:20.606208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.854 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.606487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.606637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.606653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.606861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.607068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.607084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.607289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.607490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.607506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.607791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.607990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.608005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.608115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.608301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.608317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.608468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.608673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.608687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.608899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.609278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.609706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.609976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.610086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.610233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.610248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.610445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.610588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.610603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.610811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.611320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.611660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.611813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.612022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.612172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.612188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.612351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.612541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.612556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.612835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.613194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.613494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.613713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.613907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.614281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.614583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.614738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.614877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.615294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.615587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.615735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.855 [2024-10-07 07:49:20.615934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.616283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.616712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.616858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 00:30:16.855 [2024-10-07 07:49:20.617079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.617265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.855 [2024-10-07 07:49:20.617281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.855 qpair failed and we were unable to recover it. 
00:30:16.856 [2024-10-07 07:49:20.617413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.617659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.617674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.617873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.618189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.618206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.618403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.618686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.618701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.619005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.619211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.619227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 
00:30:16.856 [2024-10-07 07:49:20.619486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.619623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.619638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.619914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.620331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.620782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.620964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 
00:30:16.856 [2024-10-07 07:49:20.621232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.621358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.621373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.621628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.621846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.621861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.622071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.622272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.622290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.622501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.622804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.622821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 
00:30:16.856 [2024-10-07 07:49:20.623088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.623239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.623255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.623484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.623685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.623700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.623925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.624202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.624218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 00:30:16.856 [2024-10-07 07:49:20.624426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.624612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.856 [2024-10-07 07:49:20.624627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.856 qpair failed and we were unable to recover it. 
00:30:16.856 [2024-10-07 07:49:20.624772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.625379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.625694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.625985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.626244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.626450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.626465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.626683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.626965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.626981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.627271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.627418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.627434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.627642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.627862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.627877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.628156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.628313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.628328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.628599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.628860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.628875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.629138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.629344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.629359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.629580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.629792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.629810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.630094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.630315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.630331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.856 [2024-10-07 07:49:20.630616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.630853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.856 [2024-10-07 07:49:20.630868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.856 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.631144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.631333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.631349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.631568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.631708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.631724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.632001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.632364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.632715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.632937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.633240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.633444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.633460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.633663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.633951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.633967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.634180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.634469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.634487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.634679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.634977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.634993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.635273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.635502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.635517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.635705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.635998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.636014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.636223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.636458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.636473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.636686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.636895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.636910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.637163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.637364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.637379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.637637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.637822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.637837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.638093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.638359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.638375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.638636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.638913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.638928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.639227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.639510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.639529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.639822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.640082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.640098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.640378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.640632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.640647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.640925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.641219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.641235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.641360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.641567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.641582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.641787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.642075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.642091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.642353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.642634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.642650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.857 [2024-10-07 07:49:20.642919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.643148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.857 [2024-10-07 07:49:20.643164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.857 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.643320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.643523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.643539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.643749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.643895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.643910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.644109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.644344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.644363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.644517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.644651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.644665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.644877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.645132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.645148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.645454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.645672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.645688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.645905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.646095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.646111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.646324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.646592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.646608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.646807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.647069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.647085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.647355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.647611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.647626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.647889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.648114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.648131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.648415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.648684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.648699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.648975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.649183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.649199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.649439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.649693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.649708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.650008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.650227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.650242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.650508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.650726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.650741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.650928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.651184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.651200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.651466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.651669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.651684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.651872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.652072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.652088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.652242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.652512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.652528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.652751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.653056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.653078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.653366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.653587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.653602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.653863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.654133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.654148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.654389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.654592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.654608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.654889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.655151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.655167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.655367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.655623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.655638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.655893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.656189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.656205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.656488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.656712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.656727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.858 [2024-10-07 07:49:20.656995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.657271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.858 [2024-10-07 07:49:20.657286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.858 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.657495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.657752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.657768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.859 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.658073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.658270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.658286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.859 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.658562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.658836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.658852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.859 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.659071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.659215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.659231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.859 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.659462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.659587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.859 [2024-10-07 07:49:20.659602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.859 qpair failed and we were unable to recover it.
00:30:16.859 [2024-10-07 07:49:20.659880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.660106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.660123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.660336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.660591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.660606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.660890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.661023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.661038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.661324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.661607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.661623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 
00:30:16.859 [2024-10-07 07:49:20.661897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.662202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.662218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.662497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.662728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.662743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.663014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.663298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.663314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.663522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.663802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.663817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 
00:30:16.859 [2024-10-07 07:49:20.664020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.664282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.664299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.664508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.664716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.664732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.665016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.665219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.665235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.665492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.665795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.665811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 
00:30:16.859 [2024-10-07 07:49:20.666088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.666349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.666365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.666497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.666725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.666741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.667024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.667315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.667331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 00:30:16.859 [2024-10-07 07:49:20.667540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.667729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.859 [2024-10-07 07:49:20.667745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.859 qpair failed and we were unable to recover it. 
00:30:16.859 [2024-10-07 07:49:20.668022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.668209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.668225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.668522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.668809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.668824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.669037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.669328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.669344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.669629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.669839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.669854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.670046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.670264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.670280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.670566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.670712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.670727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.671010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.671216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.671232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.671439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.671581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.671597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.671831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.672109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.672125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.672398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.672679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.672694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.672904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.673093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.673109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.673251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.673528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.673544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.673827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.674011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.674027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.674245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.674530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.674546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.674755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.675034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.675049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.675186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.675442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.675457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.675764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.676022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.676037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.676305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.676544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.676559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.676750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.677234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.677748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.677980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.678262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.678461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.678477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.678617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.678825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.678839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.679052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.679336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.679352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.679565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.679780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.679796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.680005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.680238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.680254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.680490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.680783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.680798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.680952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.681233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.681249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 
00:30:16.860 [2024-10-07 07:49:20.681529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.681804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.860 [2024-10-07 07:49:20.681819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.860 qpair failed and we were unable to recover it. 00:30:16.860 [2024-10-07 07:49:20.682096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.682367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.682382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.682661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.682944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.682960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.683239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.683576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.683592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 
00:30:16.861 [2024-10-07 07:49:20.683874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.684150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.684166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.684453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.684736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.684751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.684944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.685231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.685248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.685439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.685662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.685677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 
00:30:16.861 [2024-10-07 07:49:20.685910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.686151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.686168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.686419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.686700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.686715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.687045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.687320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.687336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.687620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.687855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.687870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 
00:30:16.861 [2024-10-07 07:49:20.688185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.688407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.688422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.688584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.688864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.688880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.689069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.689273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.689289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.689494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.689711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.689726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 
00:30:16.861 [2024-10-07 07:49:20.690024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.690286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.690302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.690492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.690748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.690763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.690963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.691169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.691185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 00:30:16.861 [2024-10-07 07:49:20.691465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.691611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.861 [2024-10-07 07:49:20.691626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.861 qpair failed and we were unable to recover it. 
00:30:16.861 [2024-10-07 07:49:20.691844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.692063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.692079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.692289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.692510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.692525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.692804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.693297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.693707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.693925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.694139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.694282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.694297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.694579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.694712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.694727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.694935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.695211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.861 [2024-10-07 07:49:20.695226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.861 qpair failed and we were unable to recover it.
00:30:16.861 [2024-10-07 07:49:20.695432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.695713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.695729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.695989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.696268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.696284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.696476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.696706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.696721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.696977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.697265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.697281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.697482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.697622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.697637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.697864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.698143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.698159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.698438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.698716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.698732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.699013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.699293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.699310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.699584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.699824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.699839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.699975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.700130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.700146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.700347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.700625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.700640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.700863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.701078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.701094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.701307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.701561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.701576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.701838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.702127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.702144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.702421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.702726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.702741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.703041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.703336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.703352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.703614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.703850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.703865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.704068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.704278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.704294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.704577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.704786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.704801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.705081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.705352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.705367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.705648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.705902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.705917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.706153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.706292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.706308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.706522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.706745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.706761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.706955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.707155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.707171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.707466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.707764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.707780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.708054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.708291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.708306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.708509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.708788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.708803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.862 qpair failed and we were unable to recover it.
00:30:16.862 [2024-10-07 07:49:20.709101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.862 [2024-10-07 07:49:20.709353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.709371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.709644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.709904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.709919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.710176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.710446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.710461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.710741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.710938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.710954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.711189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.711397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.711412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.711625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.711839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.711854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.712091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.712375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.712391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.712664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.712932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.712963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.713297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.713609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.713640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.713966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.714185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.714217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.714553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.714866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.714902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.715235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.715543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.715573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.715910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.716145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.716177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.716435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.716605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.716634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.716900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.717174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.717190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.717408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.717626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.717656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.717810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.718073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.718106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.718413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.718722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.718738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.718944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.719175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.719207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.719470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.719792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.719808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.720090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.720384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.720403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.720659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.720974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.721005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.721263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.721571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.721600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.721841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.722148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.722180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.722509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.722770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.722784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.723098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.723320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.723335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.723535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.723767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.723798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.863 qpair failed and we were unable to recover it.
00:30:16.863 [2024-10-07 07:49:20.724112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.724445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.863 [2024-10-07 07:49:20.724476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.724710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.724948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.724979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.725295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.725584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.725614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.725849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.726122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.726141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.726442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.726699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.726715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.726864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.727158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.727190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.727531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.727836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.727866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.728098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.728384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.728414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.728688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.728927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.728957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.729265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.729489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.729519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.729766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.730041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.730097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.730433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.730715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.730746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.731073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.731387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.731417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.731701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.731936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.731968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.732236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.732500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.732516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.732670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.732932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.732962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.733276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.733499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.733529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.733869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.734104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.734135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.734461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.734770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.734800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.735091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.735321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.735351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.735671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.735909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.735939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.736254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.736570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.736601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.736891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.737203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.737235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.737473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.737774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.864 [2024-10-07 07:49:20.737790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.864 qpair failed and we were unable to recover it.
00:30:16.864 [2024-10-07 07:49:20.737998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.865 [2024-10-07 07:49:20.738162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.865 [2024-10-07 07:49:20.738193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.865 qpair failed and we were unable to recover it.
00:30:16.865 [2024-10-07 07:49:20.738429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.865 [2024-10-07 07:49:20.738655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.865 [2024-10-07 07:49:20.738685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:16.865 qpair failed and we were unable to recover it.
00:30:16.865 [2024-10-07 07:49:20.739021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.739283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.739314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.739551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.739832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.739863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.740107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.740351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.740382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.740647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.740960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.740975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.741245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.741530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.741561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.741908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.742206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.742222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.742505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.742695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.742710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.742997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.743263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.743295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.743561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.743883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.743898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.744113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.744398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.744429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.744657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.744966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.744996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.745306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.745572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.745601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.745931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.746386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.746718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.746880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.747079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.747321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.747351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.747632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.747917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.747947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.748184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.748472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.748503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.748718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.749286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.749632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.749943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.750178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.750478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.750508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.750850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.751158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.751190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.751433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.751747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.751777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 
00:30:16.865 [2024-10-07 07:49:20.752079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.752421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.752451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.752776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.753018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.753049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.865 [2024-10-07 07:49:20.753245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.753491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.865 [2024-10-07 07:49:20.753523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.865 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.753753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.754038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.754081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.754428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.754715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.754745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.755070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.755386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.755416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.755666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.755974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.756003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.756263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.756600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.756631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.756867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.757110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.757126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.757383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.757592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.757607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.757873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.758081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.758097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.758363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.758660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.758676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.758869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.759096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.759112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.759305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.759450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.759465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.759677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.759985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.760000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.760217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.760449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.760464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.760677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.760971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.760986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.761267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.761524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.761540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.761751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.761947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.761963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.762271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.762543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.762559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.762863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.763379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.763797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.763943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.764227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.764532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.764548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.764784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.765367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.765711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.765918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.766240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.766396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.766409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.866 [2024-10-07 07:49:20.766609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.766758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.766768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.766898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.767290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 00:30:16.866 [2024-10-07 07:49:20.767675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.866 [2024-10-07 07:49:20.767907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.866 qpair failed and we were unable to recover it. 
00:30:16.867 [2024-10-07 07:49:20.768102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.768293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.768303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.867 qpair failed and we were unable to recover it. 00:30:16.867 [2024-10-07 07:49:20.768582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.768879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.768889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.867 qpair failed and we were unable to recover it. 00:30:16.867 [2024-10-07 07:49:20.769086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.769294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.769305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.867 qpair failed and we were unable to recover it. 00:30:16.867 [2024-10-07 07:49:20.769443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.769586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.867 [2024-10-07 07:49:20.769598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:16.867 qpair failed and we were unable to recover it. 
[... the same repeating sequence — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" records, an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420" record, and "qpair failed and we were unable to recover it." — continues from [2024-10-07 07:49:20.769804] through [2024-10-07 07:49:20.811811] (elapsed markers 00:30:16.867 to 00:30:17.140) ...]
00:30:17.140 [2024-10-07 07:49:20.812174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.812482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.812512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.812745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.813030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.813069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.813416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.813710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.813741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.814103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.814375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.814406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 
00:30:17.140 [2024-10-07 07:49:20.814745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.814980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.815012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.815313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.815570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.815580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.815854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.815980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.815990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.816291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.816595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.816627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 
00:30:17.140 [2024-10-07 07:49:20.816893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.817132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.817165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.140 qpair failed and we were unable to recover it. 00:30:17.140 [2024-10-07 07:49:20.817408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.817626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.140 [2024-10-07 07:49:20.817636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.817837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.818085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.818118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.818360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.818605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.818635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.818948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.819192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.819202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.819401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.819534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.819543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.819779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.820021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.820053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.820336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.820517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.820548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.820739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.821028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.821070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.821406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.821641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.821672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.821977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.822223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.822233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.822503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.822775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.822785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.823031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.823220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.823231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.823405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.823610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.823641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.824001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.824244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.824276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.824571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.824913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.824943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.825187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.825511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.825542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.825776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.826087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.826119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.826447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.826763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.826793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.827105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.827362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.827372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.827572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.827833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.827843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.828141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.828278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.828289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.828566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.828803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.828833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.829082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.829335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.829366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.829681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.830006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.830036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.830310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.830534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.830564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.830882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.831207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.831218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 00:30:17.141 [2024-10-07 07:49:20.831416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.831551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.831561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.141 qpair failed and we were unable to recover it. 
00:30:17.141 [2024-10-07 07:49:20.831837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.141 [2024-10-07 07:49:20.832147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.832180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.832426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.832604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.832634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.832884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.833217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.833249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.833509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.833820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.833859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [2024-10-07 07:49:20.834106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.834329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.834339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.834634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.834828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.834838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.835117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.835362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.835372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.835552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.835819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.835828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [2024-10-07 07:49:20.836035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.836326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.836359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.836657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.836914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.836924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.837069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.837202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.837212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.837503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.837747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.837756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [2024-10-07 07:49:20.838002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.838275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.838285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.838413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.838600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.838610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.838804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.838999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.839040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.839248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.839423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.839454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [2024-10-07 07:49:20.839799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.840132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.840164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.840472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.840684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.840694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.840964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.841250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.841261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.841464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.841756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.841766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [2024-10-07 07:49:20.842038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.842319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.842716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.842908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 00:30:17.142 [2024-10-07 07:49:20.843185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.843381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.142 [2024-10-07 07:49:20.843412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.142 qpair failed and we were unable to recover it. 
00:30:17.142 [... the same record sequence — posix.c:1032:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats without variation through 2024-10-07 07:49:20.886133 ...]
00:30:17.145 [2024-10-07 07:49:20.886399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.145 [2024-10-07 07:49:20.886654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.886684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.887016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.887249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.887288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.887589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.887836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.887866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.888106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.888336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.888346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.888546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.888674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.888684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.889007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.889247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.889279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.889498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.889691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.889701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.889913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.890177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.890209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.890522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.890845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.890875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.891115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.891355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.891385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.891573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.891808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.891838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.892104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.892342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.892352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.892552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.892844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.892854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.893142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.893411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.893421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.893667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.893929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.893940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.894134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.894358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.894389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.894637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.894950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.894980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.895195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.895473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.895504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.895676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.896005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.896035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.896291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.896601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.896631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.896877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.897211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.897243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.897416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.897654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.897684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.897996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.898287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.898319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.898657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.898902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.898932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.899248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.899451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.899481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.899673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.899985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.900016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.900353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.900666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.900697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.146 [2024-10-07 07:49:20.900934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.901226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.901258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 
00:30:17.146 [2024-10-07 07:49:20.901576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.901816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.146 [2024-10-07 07:49:20.901846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.146 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.902163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.902360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.902370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.902587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.902792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.902802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.903047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.903269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.903279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.903480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.903760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.903791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.904088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.904284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.904315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.904540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.904853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.904883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.905201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.905519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.905549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.905852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.906129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.906139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.906394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.906639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.906649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.906909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.907182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.907192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.907440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.907631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.907641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.907912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.908170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.908181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.908361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.908650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.908680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.908874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.909119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.909152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.909390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.909687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.909717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.909957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.910259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.910291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.910536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.910760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.910770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.910968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.911167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.911177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.911445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.911735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.911765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.912023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.912328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.912361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.912588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.912816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.912848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.913103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.913292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.913324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.913561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.913830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.913861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.914141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.914428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.914438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.914636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.914934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.914965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.915276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.915509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.915519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 00:30:17.147 [2024-10-07 07:49:20.915717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.915968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.147 [2024-10-07 07:49:20.915998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.147 qpair failed and we were unable to recover it. 
00:30:17.147 [2024-10-07 07:49:20.916221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.147 [2024-10-07 07:49:20.916454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.147 [2024-10-07 07:49:20.916465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.147 qpair failed and we were unable to recover it.
[... the same failure cycle — two posix.c:1032:posix_sock_create connect() errors (errno = 111, ECONNREFUSED), one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 07:49:20.916669 through 07:49:20.963985 ...]
00:30:17.151 [2024-10-07 07:49:20.964288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.964591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.964622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.964964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.965195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.965228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.965553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.965842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.965873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.966052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.966283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.966314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.966555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.966795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.966826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.967084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.967405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.967435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.967681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.967900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.967931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.968163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.968406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.968436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.968730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.969016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.969052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.969363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.969678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.969709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.970044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.970360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.970391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.970720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.970951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.970982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.971215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.971560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.971591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.971905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.972194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.972226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.972568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.972875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.972906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.973192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.973387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.973397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.973649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.973912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.973944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.974193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.974423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.974433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.974631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.974900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.974936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.975254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.975509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.975540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.975868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.976185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.976218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.976448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.976667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.976697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.976945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.977262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.977294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.151 [2024-10-07 07:49:20.977594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.977813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.977844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 
00:30:17.151 [2024-10-07 07:49:20.978142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.978401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.151 [2024-10-07 07:49:20.978411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.151 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.978605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.978900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.978931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.979240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.979489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.979521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.979780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.979962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.979993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.980298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.980591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.980622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.980946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.981267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.981300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.981593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.981833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.981864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.982160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.982474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.982504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.982804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.983071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.983103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.983354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.983663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.983693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.983964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.984197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.984229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.984539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.984838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.984869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.985165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.985330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.985360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.985698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.985960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.985991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.986233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.986473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.986504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.986806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.987084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.987117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.987410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.987633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.987664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.987841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.988128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.988160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.988396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.988689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.988719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.988958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.989197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.989229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.989549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.989836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.989846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.990131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.990378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.990388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.990569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.990837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.990868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.991145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.991436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.991446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.991659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.991972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.991982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.992263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.992526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.992556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.992866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.993127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.993160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.993454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.993777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.993787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 
00:30:17.152 [2024-10-07 07:49:20.994032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.994222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.994233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.152 qpair failed and we were unable to recover it. 00:30:17.152 [2024-10-07 07:49:20.994464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.994774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.152 [2024-10-07 07:49:20.994806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.153 qpair failed and we were unable to recover it. 00:30:17.153 [2024-10-07 07:49:20.995081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.153 [2024-10-07 07:49:20.995347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.153 [2024-10-07 07:49:20.995357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.153 qpair failed and we were unable to recover it. 00:30:17.153 [2024-10-07 07:49:20.995540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.153 [2024-10-07 07:49:20.995836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.153 [2024-10-07 07:49:20.995866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.153 qpair failed and we were unable to recover it. 
00:30:17.153 [2024-10-07 07:49:20.996130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.153 [2024-10-07 07:49:20.996455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.153 [2024-10-07 07:49:20.996486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.153 qpair failed and we were unable to recover it.
[... the same four-record sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from [2024-10-07 07:49:20.996739] through [2024-10-07 07:49:21.038894] ...]
00:30:17.156 [2024-10-07 07:49:21.039036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.039249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.039260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.156 qpair failed and we were unable to recover it. 00:30:17.156 [2024-10-07 07:49:21.039505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.039764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.039774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.156 qpair failed and we were unable to recover it. 00:30:17.156 [2024-10-07 07:49:21.040019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.040194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.040205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.156 qpair failed and we were unable to recover it. 00:30:17.156 [2024-10-07 07:49:21.040476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.156 [2024-10-07 07:49:21.040602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.040612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-10-07 07:49:21.040882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.041074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.041084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.041329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.041605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.041617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.041892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.042340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-10-07 07:49:21.042724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.042874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.043068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.043335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.043345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.043539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.043761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.043771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.044018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.044291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.044302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-10-07 07:49:21.044557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.044825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.044835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.045074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.045322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.045333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.045606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.045810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.045821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.046001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.046282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.046295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-10-07 07:49:21.046491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.046701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.046711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.046960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.047228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.047239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.047451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.047644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.047654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.047907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.048028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.048038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-10-07 07:49:21.048320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.048609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.048619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.048913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.049159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.049169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.049415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.049695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.049705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.157 qpair failed and we were unable to recover it. 00:30:17.157 [2024-10-07 07:49:21.049903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.050196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.157 [2024-10-07 07:49:21.050207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.050399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.050648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.050659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.050933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.051192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.051204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.051496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.051708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.051718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.051963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.052158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.052169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.052434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.052677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.052687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.052876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.053110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.053121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.053393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.053577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.053588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.053790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.053994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.054004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.054268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.054540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.054550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.054849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.055109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.055120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.055341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.055557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.055567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.055748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.055990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.056003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.056199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.056384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.056394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.056642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.056770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.056780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.057046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.057322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.057333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.057534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.057823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.057833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.058025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.058238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.058249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.058446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.058570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.058580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.058771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.059039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.059049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 00:30:17.158 [2024-10-07 07:49:21.059170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.059444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.158 [2024-10-07 07:49:21.059454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.158 [2024-10-07 07:49:21.059668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.059857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.059867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.060121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.060386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.060396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.060617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.060854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.060865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.061012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.061200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.061211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 
00:30:17.159 [2024-10-07 07:49:21.061388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.061582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.061593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.061790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.062006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.062017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.062274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.062526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.062536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.062805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 
00:30:17.159 [2024-10-07 07:49:21.063314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.063681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.063957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.064228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.064410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.064420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.064638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.064830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.064841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 
00:30:17.159 [2024-10-07 07:49:21.065021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.065198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.065208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.065327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.065504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.065515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.065805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.066050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.066066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 00:30:17.159 [2024-10-07 07:49:21.066261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.066507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.066517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.159 qpair failed and we were unable to recover it. 
00:30:17.159 [2024-10-07 07:49:21.066727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-10-07 07:49:21.066989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.066999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.160 qpair failed and we were unable to recover it. 00:30:17.160 [2024-10-07 07:49:21.067257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.067434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.067444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.160 qpair failed and we were unable to recover it. 00:30:17.160 [2024-10-07 07:49:21.067582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.067783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.067793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.160 qpair failed and we were unable to recover it. 00:30:17.160 [2024-10-07 07:49:21.067919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.068170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-10-07 07:49:21.068181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.160 qpair failed and we were unable to recover it. 
00:30:17.160 [2024-10-07 07:49:21.068380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.068619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.068630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.068832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.069064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.069075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.069345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.069563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.069574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.069852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.070241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.070676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.070829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.071109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.071399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.071409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.071616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.071824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.071834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.072028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.072224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.072235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.072373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.072619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.072629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.072896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.073074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.073085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.073359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.073556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.073567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.073816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.073992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.074002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.074202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.074399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.074409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.074668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.074910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.074920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.075165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.075453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.075463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.075590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.075787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.160 [2024-10-07 07:49:21.075798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.160 qpair failed and we were unable to recover it.
00:30:17.160 [2024-10-07 07:49:21.076066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.076194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.076204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.076474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.076784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.076794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.076990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.077197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.077229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.077549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.077859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.077890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.078221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.078465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.078496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.078686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.078995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.079026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.079359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.079597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.079628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.079943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.080254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.080286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.080515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.080806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.080837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.081183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.081424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.081455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.081752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.082037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.082079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.082321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.082609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.082640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.082939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.083182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.083214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.083402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.083636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.083667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.083983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.084234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.084267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.084588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.084915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.084946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.085271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.085606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.085638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.085974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.086311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.086343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.086608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.086897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.086937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.087216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.087495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.087526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.161 qpair failed and we were unable to recover it.
00:30:17.161 [2024-10-07 07:49:21.087849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.161 [2024-10-07 07:49:21.088105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.088137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.088474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.088759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.088790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.089103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.089440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.089472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.089779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.090116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.090149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.090395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.090718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.090749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.091078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.091396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.091427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.091762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.092019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.092051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.092427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.092735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.092767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.093103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.093415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.093447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.093681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.093956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.093986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.094231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.094473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.094484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.094663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.094907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.094918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.095186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.095417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.095449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.095766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.096090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.096122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.096442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.096699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.096729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.097055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.097259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.097270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.097519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.097765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.097796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.098033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.098275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.098308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.098599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.098865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.162 [2024-10-07 07:49:21.098875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.162 qpair failed and we were unable to recover it.
00:30:17.162 [2024-10-07 07:49:21.099133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.099309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.099319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.433 qpair failed and we were unable to recover it.
00:30:17.433 [2024-10-07 07:49:21.099590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.099785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.099795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.433 qpair failed and we were unable to recover it.
00:30:17.433 [2024-10-07 07:49:21.099987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.100242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.100253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.433 qpair failed and we were unable to recover it.
00:30:17.433 [2024-10-07 07:49:21.100454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.100710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.100720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.433 qpair failed and we were unable to recover it.
00:30:17.433 [2024-10-07 07:49:21.100943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.101235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.101246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.433 qpair failed and we were unable to recover it.
00:30:17.433 [2024-10-07 07:49:21.101439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.433 [2024-10-07 07:49:21.101729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.101739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.102035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.102363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.102395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.102713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.102973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.103004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.103240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.103528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.103559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.103843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.104089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.104121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.104418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.104727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.104758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.105066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.105210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.105221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.105494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.105685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.105695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.105901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.106214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.106246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.106492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.106728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.106759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.107043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.107384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.434 [2024-10-07 07:49:21.107415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.434 qpair failed and we were unable to recover it.
00:30:17.434 [2024-10-07 07:49:21.107670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.107959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.107989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.108254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.108444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.108474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.108787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.108987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.108997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.109243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.109495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.109525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 
00:30:17.434 [2024-10-07 07:49:21.109785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.110009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.110040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.110291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.110578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.110608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.110878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.111152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.111183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.111504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.111818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.111849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 
00:30:17.434 [2024-10-07 07:49:21.112181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.112495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.112526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.112873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.113175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.113207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.113446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.113732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.113742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.114039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.114302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.114334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 
00:30:17.434 [2024-10-07 07:49:21.114649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.114978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.434 [2024-10-07 07:49:21.115009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.434 qpair failed and we were unable to recover it. 00:30:17.434 [2024-10-07 07:49:21.115253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.115586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.115617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.115855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.116138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.116171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.116431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.116745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.116774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 
00:30:17.435 [2024-10-07 07:49:21.117030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.117376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.117409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.117712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.117918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.117928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.118197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.118479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.118510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.118778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.119031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.119040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 
00:30:17.435 [2024-10-07 07:49:21.119239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.119459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.119495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.119749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.119997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.120028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.120299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.120470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.120501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.120714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.120983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.121013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 
00:30:17.435 [2024-10-07 07:49:21.121219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.121449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.121479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.121801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.122041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.122051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.122367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.122629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.122659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.122970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.123304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.123337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 
00:30:17.435 [2024-10-07 07:49:21.123574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.123809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.123839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.124160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.124470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.124501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.124784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.125078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.125116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.125458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.125755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.125785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 
00:30:17.435 [2024-10-07 07:49:21.126135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.126332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.126363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.126590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.126896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.126927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.127244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.127555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.127586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.435 qpair failed and we were unable to recover it. 00:30:17.435 [2024-10-07 07:49:21.127899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.128236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.435 [2024-10-07 07:49:21.128268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.128576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.128867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.128898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.129131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.129404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.129435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.129678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.130012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.130042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.130304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.130620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.130651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.130878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.131180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.131222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.131544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.131738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.131769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.132110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.132371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.132402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.132709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.132943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.132974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.133289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.133520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.133551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.133779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.134007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.134037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.134228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.134450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.134481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.134772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.135107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.135139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.135454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.135765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.135775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.136074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.136279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.136290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.136466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.136754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.136767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.137039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.137360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.137392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.137645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.137886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.137917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.138185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.138430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.138462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.138785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.139097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.139130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.139449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.139679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.139709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 
00:30:17.436 [2024-10-07 07:49:21.140033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.140293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.140325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.140632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.140927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.140957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.141299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.141555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.141585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.436 qpair failed and we were unable to recover it. 00:30:17.436 [2024-10-07 07:49:21.141824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.436 [2024-10-07 07:49:21.142137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.142170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.142412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.142727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.142758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.142950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.143263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.143296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.143566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.143805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.143836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.144068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.144257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.144267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.144458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.144803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.144833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.145131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.145399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.145409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.145668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.145953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.145984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.146287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.146531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.146562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.146864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.147201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.147234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.147570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.147803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.147834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.148092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.148335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.148366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.148640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.148992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.149022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.149379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.149668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.149698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.149960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.150283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.150315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.150632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.150953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.150984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.151276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.151518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.151548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.151843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.152130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.152161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.152510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.152765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.152796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.153031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.153336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.153368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 00:30:17.437 [2024-10-07 07:49:21.153681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.153992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.154023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.437 qpair failed and we were unable to recover it. 
00:30:17.437 [2024-10-07 07:49:21.154300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.437 [2024-10-07 07:49:21.154621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.154652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.154980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.155323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.155355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.155661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.155960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.155990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.156240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.156553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.156584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.438 [2024-10-07 07:49:21.156925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.157236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.157269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.157588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.157830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.157840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.158148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.158391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.158423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.158739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.159051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.159095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.438 [2024-10-07 07:49:21.159421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.159731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.159762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.160138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.160381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.160413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.160721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.160911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.160921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.161195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.161400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.161411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.438 [2024-10-07 07:49:21.161634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.161923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.161954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.162253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.162493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.162524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.162841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.163078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.163110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.163363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.163678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.163709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.438 [2024-10-07 07:49:21.164013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.164344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.164377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.164694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.165002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.165033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.165273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.165588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.165619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.165939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.166273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.166306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.438 [2024-10-07 07:49:21.166620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.166881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.166912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.167214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.167528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.167559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.167877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.168118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.168150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 00:30:17.438 [2024-10-07 07:49:21.168397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.168685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.438 [2024-10-07 07:49:21.168716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.438 qpair failed and we were unable to recover it. 
00:30:17.439 [2024-10-07 07:49:21.169092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.169385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.169416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.169742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.169974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.170004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.170329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.170568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.170598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.170897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.171166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.171198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 
00:30:17.439 [2024-10-07 07:49:21.171433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.171699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.171730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.171970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.172187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.172198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.172447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.172639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.172648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.172857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.173141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.173172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 
00:30:17.439 [2024-10-07 07:49:21.173361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.173656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.173687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.174026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.174334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.174366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.174690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.174931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.174962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.175272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.175592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.175623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 
00:30:17.439 [2024-10-07 07:49:21.175887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.176174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.176206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.176499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.176799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.176830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.177164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.177385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.177416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 00:30:17.439 [2024-10-07 07:49:21.177733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.178066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.178099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.439 qpair failed and we were unable to recover it. 
00:30:17.439 [2024-10-07 07:49:21.178343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.178651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.439 [2024-10-07 07:49:21.178682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.178931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.179213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.179224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.179526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.179872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.179903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.180137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.180342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.180373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 
00:30:17.440 [2024-10-07 07:49:21.180619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.180955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.180986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.181345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.181565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.181596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.181825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.182149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.182182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.182477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.182715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.182747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 
00:30:17.440 [2024-10-07 07:49:21.183017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.183247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.183279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.183465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.183753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.183784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.184103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.184426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.184457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.184787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.185101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.185133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 
00:30:17.440 [2024-10-07 07:49:21.185428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.185674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.185706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.185932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.186259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.186292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.186566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.186878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.186909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 00:30:17.440 [2024-10-07 07:49:21.187153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.187473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.440 [2024-10-07 07:49:21.187504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.440 qpair failed and we were unable to recover it. 
00:30:17.440 [2024-10-07 07:49:21.187824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.440 [2024-10-07 07:49:21.188007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.440 [2024-10-07 07:49:21.188038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.440 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fbfb0000b90 (addr=10.0.0.2, port=4420) from 2024-10-07 07:49:21.188321 through 07:49:21.240254 ...]
00:30:17.444 [2024-10-07 07:49:21.240598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.240906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.240938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.241288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.241589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.241620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.241928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.242171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.242204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.242484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.242758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.242790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 
00:30:17.444 [2024-10-07 07:49:21.243122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.243392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.243424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.243754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.243982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.244013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.244397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.244638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.244669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 00:30:17.444 [2024-10-07 07:49:21.244918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.245186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.245197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.444 qpair failed and we were unable to recover it. 
00:30:17.444 [2024-10-07 07:49:21.245360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.444 [2024-10-07 07:49:21.245571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.245603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.245845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.246166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.246199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.246529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.246778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.246809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.247081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.247330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.247361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 
00:30:17.445 [2024-10-07 07:49:21.247677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.247926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.247957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.248241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.248505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.248537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.248830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.249082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.249116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.249440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.249691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.249722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 
00:30:17.445 [2024-10-07 07:49:21.250071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.250205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.250216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.250445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.250764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.250795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.251108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.251343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.251354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.251663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.251992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.252024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 
00:30:17.445 [2024-10-07 07:49:21.252291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.252597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.252629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.252889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.253117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.253150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.253356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.253567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.253600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.253765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.254009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.254042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 
00:30:17.445 [2024-10-07 07:49:21.254280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.254546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.254578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.254844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.255147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.255180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.445 [2024-10-07 07:49:21.255368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.255599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.445 [2024-10-07 07:49:21.255631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.445 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.255912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.256267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.256300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.256563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.256898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.256930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.257267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.257570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.257601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.257956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.258279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.258313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.258573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.258836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.258878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.259139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.259293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.259304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.259527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.259687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.259698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.259891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.260259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.260699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.260944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.261244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.261491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.261523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.261843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.262150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.262183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.262549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.262914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.262946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.263208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.263431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.263463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.263821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.264146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.264181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.264447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.264627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.264658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.265038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.265389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.265422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.265685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.265940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.265972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.266279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.266608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.266639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.266972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.267285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.267298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.267564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.267903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.267934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 
00:30:17.446 [2024-10-07 07:49:21.268259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.268422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.268434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.446 qpair failed and we were unable to recover it. 00:30:17.446 [2024-10-07 07:49:21.268703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.269017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.446 [2024-10-07 07:49:21.269028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.269323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.269514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.269547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.269897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.270206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.270241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.270584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.270830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.270862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.271166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.271431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.271465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.271740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.272077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.272109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.272360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.272659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.272692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.272948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.273280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.273313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.273625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.273937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.273968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.274310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.274524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.274535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.274676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.274967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.274999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.275369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.275697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.275728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.276003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.276328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.276340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.276548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.276715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.276726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.276994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.277135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.277146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.277351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.277522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.277553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.277888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.278126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.278137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.278345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.278570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.278602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.278831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.279161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.279195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.279412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.279626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.279637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.279861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.280159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.280192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.280383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.280626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.280657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.280906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.281242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.281275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 
00:30:17.447 [2024-10-07 07:49:21.281515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.281736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.281768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.447 qpair failed and we were unable to recover it. 00:30:17.447 [2024-10-07 07:49:21.282118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.447 [2024-10-07 07:49:21.282453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.282485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.282721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.283051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.283097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.283413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.283614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.283646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.283883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.284182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.284193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.284439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.284760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.284791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.284983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.285260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.285272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.285574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.285956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.285987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.286279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.286487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.286518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.286836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.287156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.287189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.287515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.287862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.287894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.288236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.288581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.288612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.288861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.289181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.289215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.289406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.289640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.289672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.290027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.290235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.290269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.290574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.290828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.290859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.291213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.291555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.291586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.291865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.292171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.292182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.292397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.292611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.292622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.292895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.293194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.293204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.293467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.293677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.293688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.293909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.294133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.294144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.294459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.294701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.294732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 00:30:17.448 [2024-10-07 07:49:21.295010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.295278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.295311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.448 qpair failed and we were unable to recover it. 
00:30:17.448 [2024-10-07 07:49:21.295630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.448 [2024-10-07 07:49:21.295981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.296012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.296226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.296484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.296495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.296778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.297099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.297133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.297381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.297571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.297582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 
00:30:17.449 [2024-10-07 07:49:21.297826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.298025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.298055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.298414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.298685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.298716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.298975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.299281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.299316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.299611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.299894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.299931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 
00:30:17.449 [2024-10-07 07:49:21.300155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.300372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.300404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.300600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.300835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.300866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.301184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.301442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.301474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.301678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.301981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.302012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 
00:30:17.449 [2024-10-07 07:49:21.302384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.302693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.302725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.303069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.303320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.303352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.303615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.304007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.304039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.304334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.304708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.304740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 
00:30:17.449 [2024-10-07 07:49:21.304975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.305253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.305287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.305563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.305823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.305854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.306109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.306406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.306438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.306699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.306950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.306981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 
00:30:17.449 [2024-10-07 07:49:21.307264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.307478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.307509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.307863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.308037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.308079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.449 [2024-10-07 07:49:21.308323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.308475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.449 [2024-10-07 07:49:21.308489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.449 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.308682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.308918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.308949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.309291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.309542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.309573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.309851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.310177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.310210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.310478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.310694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.310725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.311035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.311340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.311352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.311565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.311728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.311739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.311960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.312226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.312238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.312451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.312716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.312749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.313051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.313331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.313363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.313555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.313818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.313855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.314094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.314366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.314409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.314572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.314777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.314808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.315158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.315488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.315521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.315878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.316219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.316232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.316419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.316690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.316721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.316961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.317208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.317219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.317436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.317695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.317733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.317968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.318291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.318325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.318535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.318801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.318833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.319091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.319350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.319388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.319663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.319896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.319927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 
00:30:17.450 [2024-10-07 07:49:21.320192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.320446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.320477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.450 qpair failed and we were unable to recover it. 00:30:17.450 [2024-10-07 07:49:21.320754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.321120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.450 [2024-10-07 07:49:21.321153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.321486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.321690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.321721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.321996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.322348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.322382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 
00:30:17.451 [2024-10-07 07:49:21.322666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.323031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.323089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.323300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.323573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.323605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.323944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.324271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.324306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.324537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.324756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.324789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 
00:30:17.451 [2024-10-07 07:49:21.324982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.325300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.325339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.325665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.326017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.326048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.326294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.326502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.326513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.326795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.326981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.327012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 
00:30:17.451 [2024-10-07 07:49:21.327345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.327600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.327631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.327888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.328139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.328172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.328417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.328775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.328807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.329093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.329365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.329396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 
00:30:17.451 [2024-10-07 07:49:21.329597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.329933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.329965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.330263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.330614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.330647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.330957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.331299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.331333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.331650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.331971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.332002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 
00:30:17.451 [2024-10-07 07:49:21.332341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.332591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.332624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.332980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.333225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.333258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.451 [2024-10-07 07:49:21.333558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.333888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.451 [2024-10-07 07:49:21.333930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.451 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.334259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.334451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.334483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.334867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.335177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.335210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.335414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.335621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.335652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.335905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.336229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.336262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.336475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.336726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.336757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.337027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.337367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.337400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.337673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.337865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.337897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.338247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.338459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.338490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.338741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.339076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.339110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.339375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.339704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.339735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.340075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.340290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.340323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.340521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.340851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.340883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.341220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.341540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.341571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.341846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.342083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.342116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.342428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.342699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.342732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.343005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.343233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.343245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.343464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.343810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.343841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.344136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.344386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.344418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.344715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.345046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.345091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.345359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.345545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.345577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.345887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.346140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.346152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 
00:30:17.452 [2024-10-07 07:49:21.346440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.346672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.346704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.452 qpair failed and we were unable to recover it. 00:30:17.452 [2024-10-07 07:49:21.347036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.347260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.452 [2024-10-07 07:49:21.347271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.347564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.347833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.347865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.348196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.348432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.348465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 
00:30:17.453 [2024-10-07 07:49:21.348785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.349072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.349105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.349331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.349650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.349682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.349965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.350245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.350278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.350538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.350900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.350932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 
00:30:17.453 [2024-10-07 07:49:21.351208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.351479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.351511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.351845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.352177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.352211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.352478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.352860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.352893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.353167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.353425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.353437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 
00:30:17.453 [2024-10-07 07:49:21.353674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.353871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.353902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.354225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.354514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.354545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.354739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.355080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.355113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 00:30:17.453 [2024-10-07 07:49:21.355328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.355574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.453 [2024-10-07 07:49:21.355606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.453 qpair failed and we were unable to recover it. 
00:30:17.453 [2024-10-07 07:49:21.355925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.356286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.356319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.356515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.356713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.356745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.357000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.357256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.357288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.357478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.357660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.357691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.358012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.358301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.358312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.358488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.358797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.358830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.359089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.359421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.359453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.359696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.360002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.360034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.453 [2024-10-07 07:49:21.360305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.360555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.453 [2024-10-07 07:49:21.360587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.453 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.360958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.361240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.361273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.361569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.361744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.361775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.362055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.362291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.362302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.362519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.362657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.362690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.362946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.363125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.363158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.363360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.363554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.363585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.363831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.364094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.364127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.364379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.364545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.364577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.364924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.365190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.365224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.365412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.365592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.365623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.365970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.366205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.366237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.366568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.366895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.366928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.367239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.367562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.367598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.367862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.368186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.368197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.368427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.368672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.368704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.369080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.369319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.369351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.369688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.370009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.370040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.370371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.370571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.370582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.370899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.371253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.371287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.371549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.371731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.371762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.371954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.372307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.372348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.372566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.372825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.372871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.454 qpair failed and we were unable to recover it.
00:30:17.454 [2024-10-07 07:49:21.373183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.454 [2024-10-07 07:49:21.373440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.373473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.373663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.373983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.374014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.374252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.374493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.374505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.374625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.374833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.374864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.375220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.375464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.375496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.375686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.376044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.376185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.376471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.376701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.376732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.377075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.377395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.377427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.377617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.377962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.377994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.378237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.378490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.378522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.378864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.379181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.379214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.379415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.379763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.379795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.380165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.380416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.380448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.380760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.381248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.381695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.381897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.382191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.382395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.382426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.382690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.382939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.382971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.383325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.383617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.383650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.383987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.384313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.384347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.384552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.384796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.384828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.385162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.385353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.385384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.385634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.385961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.455 [2024-10-07 07:49:21.385992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.455 qpair failed and we were unable to recover it.
00:30:17.455 [2024-10-07 07:49:21.386325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.386587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.386619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.386894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.387164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.387204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.387450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.387660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.387671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.387796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.388065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.388077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.388288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.388449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.388460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.388666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.388997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.389009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.389283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.389488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.389520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.389855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.390174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.390202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.390346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.390628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.390638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.390897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.391172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.391184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.391475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.391682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.391714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.392047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.392263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.392296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.392497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.392768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.392800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.456 [2024-10-07 07:49:21.393110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.393360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.456 [2024-10-07 07:49:21.393392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.456 qpair failed and we were unable to recover it.
00:30:17.726 [2024-10-07 07:49:21.393646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.393856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.393867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.726 qpair failed and we were unable to recover it.
00:30:17.726 [2024-10-07 07:49:21.394095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.394311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.394325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.726 qpair failed and we were unable to recover it.
00:30:17.726 [2024-10-07 07:49:21.394591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.394740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.394752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.726 qpair failed and we were unable to recover it.
00:30:17.726 [2024-10-07 07:49:21.395031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.395243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.726 [2024-10-07 07:49:21.395255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.726 qpair failed and we were unable to recover it.
00:30:17.726 [2024-10-07 07:49:21.395421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.395560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.395571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.395822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.396094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.396106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.396310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.396511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.396544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.396856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.397161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.397194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.397454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.397634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.397644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.397884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.398122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.398155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.398414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.398683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.398715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.399002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.399257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.399296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.399621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.399942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.399973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.400218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.400462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.400494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.400832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.401132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.401144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.401423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.401792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.401823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.402153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.402406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.402438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.402695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.402954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.402985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.403316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.403602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.403633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.403994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.404246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.727 [2024-10-07 07:49:21.404280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.727 qpair failed and we were unable to recover it.
00:30:17.727 [2024-10-07 07:49:21.404477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.404784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.404817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.405056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.405313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.405351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.405703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.406024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.406056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.406407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.406615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.406646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 
00:30:17.727 [2024-10-07 07:49:21.406919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.407158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.407191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.407500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.407701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.407733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.408077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.408355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.408387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.408637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.408953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.408986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 
00:30:17.727 [2024-10-07 07:49:21.409375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.409708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.409739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.409978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.410287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.727 [2024-10-07 07:49:21.410320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.727 qpair failed and we were unable to recover it. 00:30:17.727 [2024-10-07 07:49:21.410532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.410796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.410828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.411135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.411401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.411439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.411807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.412100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.412133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.412488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.412694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.412726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.412915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.413250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.413283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.413617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.413941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.413972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.414275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.414512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.414523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.414821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.415177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.415210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.415537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.415866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.415897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.416103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.416408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.416440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.416730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.416979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.417011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.417272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.417526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.417558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.417960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.418266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.418298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.418606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.418780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.418811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.419027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.419247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.419280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.419482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.419811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.419842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.420185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.420414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.420425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.420583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.420860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.420890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.421290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.421549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.421581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.421891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.422164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.422196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.422471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.422680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.422719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.423018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.423371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.423405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.423692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.423946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.423978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.424218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.424397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.424428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.424625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.424813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.424845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 00:30:17.728 [2024-10-07 07:49:21.425156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.425479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.425510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.728 qpair failed and we were unable to recover it. 
00:30:17.728 [2024-10-07 07:49:21.425814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.728 [2024-10-07 07:49:21.426054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.426101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.426409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.426664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.426696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.427023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.427354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.427388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.427574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.427837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.427868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.428232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.428535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.428566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.428814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.429181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.429215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.429548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.429759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.429769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.429977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.430214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.430225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.430375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.430568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.430578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.430745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.431080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.431112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.431383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.431663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.431695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.431875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.432134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.432169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.432379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.432576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.432607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.432921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.433203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.433238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.433503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.433811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.433844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.434134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.434371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.434382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.434548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.434835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.434867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.435135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.435439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.435471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.435754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.436028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.436074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.436338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.436567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.436598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.436865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.437103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.437135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.437382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.437685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.437717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.437976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.438235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.438269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.438603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.438947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.438979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 
00:30:17.729 [2024-10-07 07:49:21.439304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.439516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.439547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.439794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.440029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.440072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.440335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.440531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.729 [2024-10-07 07:49:21.440561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.729 qpair failed and we were unable to recover it. 00:30:17.729 [2024-10-07 07:49:21.440930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.730 [2024-10-07 07:49:21.441110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.730 [2024-10-07 07:49:21.441142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.730 qpair failed and we were unable to recover it. 
00:30:17.730 [2024-10-07 07:49:21.441462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.441709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.441741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.442082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.442244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.442277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.442528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.442689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.442721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.443026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.443244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.443278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.443521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.443680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.443711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.443963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.444199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.444231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.444515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.444848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.444881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.445204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.445554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.445587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.445845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.446154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.446187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.446448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.446765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.446796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.447120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.447372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.447402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.447660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.447991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.448022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.448237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.448491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.448523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.448898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.449202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.449235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.449485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.449688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.449719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.450056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.450323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.450355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.450571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.450911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.450943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.451270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.451469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.451500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.451757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.452073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.452105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.452289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.452537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.452567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.452838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.453030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.453072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.453317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.453520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.453551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.453817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.454209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.454675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.454949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.730 qpair failed and we were unable to recover it.
00:30:17.730 [2024-10-07 07:49:21.455267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.455458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.730 [2024-10-07 07:49:21.455468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.455602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.455833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.455843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.456013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.456251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.456284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.456545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.456688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.456698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.456907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.457248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.457283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.457594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.457767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.457799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.457995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.458287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.458320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.458597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.458883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.458914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.459216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.459480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.459512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.459836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.460165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.460213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.460574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.460823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.460854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.461133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.461341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.461372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.461621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.461887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.461919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.462169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.462434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.462474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.462665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.462920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.462951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.463266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.463470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.463501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.463841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.464154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.464188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.464386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.464650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.464684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.464832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.465043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.465101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.465394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.465646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.465678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.731 qpair failed and we were unable to recover it.
00:30:17.731 [2024-10-07 07:49:21.465999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.731 [2024-10-07 07:49:21.466286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.466320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.466598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.466880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.466912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.467193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.467398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.467430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.467680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.468036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.468080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.468370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.468544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.468575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.468987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.469264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.469297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.469499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.469741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.469773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.470053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.470314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.470346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.470561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.470727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.470759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.471106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.471356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.471387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.471720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.472043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.472087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.472407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.472710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.472742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.473004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.473253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.473285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.473541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.473690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.473701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.473984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.474298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.474332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.474672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.474906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.474938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.475237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.475506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.475538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.475919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.476126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.476159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.476349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.476652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.476684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.477021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.477298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.477331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.477593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.477807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.477840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.478118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.478381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.478413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.478699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.478933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.478964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.479244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.479607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.479639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.479892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.480201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.480235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.480443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.480678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.480710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.481055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.481392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.732 [2024-10-07 07:49:21.481424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.732 qpair failed and we were unable to recover it.
00:30:17.732 [2024-10-07 07:49:21.481682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.732 [2024-10-07 07:49:21.482028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.732 [2024-10-07 07:49:21.482069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.732 qpair failed and we were unable to recover it. 00:30:17.732 [2024-10-07 07:49:21.482401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.732 [2024-10-07 07:49:21.482703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.732 [2024-10-07 07:49:21.482735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.732 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.483019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.483341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.483375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.483563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.483766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.483798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.484084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.484363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.484405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.484523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.484835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.484868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.485139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.485380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.485417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.485769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.486118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.486152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.486492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.486820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.486852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.487185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.487434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.487467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.487826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.488079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.488111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.488356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.488601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.488633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.488824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.489053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.489100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.489284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.489541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.489573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.489828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.490055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.490100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.490346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.490623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.490655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.490857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.491091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.491130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.491390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.491635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.491666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.491900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.492157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.492190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.492457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.492746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.492779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.492959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.493221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.493254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.493441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.493622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.493653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.493919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.494174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.494208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.494522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.494842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.494874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.495165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.495427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.495460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.495791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.496111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.496146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.496425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.496758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.496795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.497113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.497398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.497429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 
00:30:17.733 [2024-10-07 07:49:21.497757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.498019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.498050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.498334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.498602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.498632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.733 qpair failed and we were unable to recover it. 00:30:17.733 [2024-10-07 07:49:21.498917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.733 [2024-10-07 07:49:21.499220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.499253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.499440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.499701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.499732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.500009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.500148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.500182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.500359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.500616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.500657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.500918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.501215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.501247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.501503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.501850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.501882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.502243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.502497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.502535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.502898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.503169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.503203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.503387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.503689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.503720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.503968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.504188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.504221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.504457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.504802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.504834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.505081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.505386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.505430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.505589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.505836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.505867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.506216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.506475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.506507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.506719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.506995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.507027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.507369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.507684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.507715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.507977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.508295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.508328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.508597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.508848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.508879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.509151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.509402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.509433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.509685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.509905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.509915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.510132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.510275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.510286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.510442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.510674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.510706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.510958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.511223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.511256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.511507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.511784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.511815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.512041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.512339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.512373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.512628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.512871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.512908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 
00:30:17.734 [2024-10-07 07:49:21.513218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.513424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.513456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.513635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.513941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.513972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.514344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.514538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.514570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.734 qpair failed and we were unable to recover it. 00:30:17.734 [2024-10-07 07:49:21.514855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.734 [2024-10-07 07:49:21.515147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.515160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.515454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.515731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.515762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.516134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.516320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.516352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.516539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.516727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.516758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.517111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.517415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.517447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.517753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.518258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.518732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.518996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.519295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.519476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.519507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.519873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.520129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.520163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.520428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.520697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.520729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.521033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.521293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.521327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.521615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.521978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.522010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.522304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.522553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.522584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.522851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.523088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.523099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.523259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.523479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.523511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.523824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.524132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.524166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.524410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.524717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.524748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.525087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.525340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.525372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.525584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.525780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.525811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.526099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.526345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.526376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.526626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.526945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.526976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.527179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.527527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.527559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.527878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.528158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.528192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.528456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.528786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.528818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.529158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.529458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.529491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.529676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.529919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.529952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.530283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.530638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.530670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 00:30:17.735 [2024-10-07 07:49:21.530937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.531219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.735 [2024-10-07 07:49:21.531252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.735 qpair failed and we were unable to recover it. 
00:30:17.735 [2024-10-07 07:49:21.531568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.531818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.531849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.532178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.532483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.532514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.532803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.532959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.532990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.533197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.533450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.533483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.533755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.533977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.534008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.534303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.534546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.534577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.534917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.535190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.535223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.535481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.535725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.535736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.536053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.536336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.536366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.536588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.536868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.536898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.537128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.537381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.537412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.537645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.537931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.537964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.538295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.538492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.538528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.538824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.539090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.539124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.539460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.539713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.539745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.540074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.540435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.540467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.540815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.541053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.541068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.541239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.541457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.541488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.541686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.541990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.542021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.542364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.542609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.542640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.542821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.543086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.543118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.543378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.543633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.543664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.543917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.544185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.544219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.544477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.544682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.544714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 
00:30:17.736 [2024-10-07 07:49:21.544972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.545208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.545219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.736 qpair failed and we were unable to recover it. 00:30:17.736 [2024-10-07 07:49:21.545505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.736 [2024-10-07 07:49:21.545819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.545850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.546228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.546476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.546507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.546821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.547128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.547162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.547505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.547713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.547745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.548073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.548271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.548281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.548471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.548637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.548668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.549025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.549210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.549243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.549529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.549822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.549853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.550170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.550418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.550449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.550765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.551004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.551035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.551385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.551690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.551721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.552081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.552340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.552371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.552707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.552961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.552991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.553261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.553515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.553546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.553853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.554110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.554145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.554436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.554731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.554762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.555082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.555198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.555208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.555420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.555599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.555629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.555960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.556242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.556275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.556549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.556785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.556817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.557124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.557333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.557364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.557569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.557842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.557874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.558205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.558441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.558472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.737 [2024-10-07 07:49:21.558662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.559026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.559057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.559326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.559535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.559567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.559891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.560112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.560123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 00:30:17.737 [2024-10-07 07:49:21.560419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.560749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.737 [2024-10-07 07:49:21.560781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.737 qpair failed and we were unable to recover it. 
00:30:17.740 [2024-10-07 07:49:21.605620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.740 [2024-10-07 07:49:21.605860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.740 [2024-10-07 07:49:21.605891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.740 qpair failed and we were unable to recover it. 00:30:17.740 [2024-10-07 07:49:21.606240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.740 [2024-10-07 07:49:21.606568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.740 [2024-10-07 07:49:21.606600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.606884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.607208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.607241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.607587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.607848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.607879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.608080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.608346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.608378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.608557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.608792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.608824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.609038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.609327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.609360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.609558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.609881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.609915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.610260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.610564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.610596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.610853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.611106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.611139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.611396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.611639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.611669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.611982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.612224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.612256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.612476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.612816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.612847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.613116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.613378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.613411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.613603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.613903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.613936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.614262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.614581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.614613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.614958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.615272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.615305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.615590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.615960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.615992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.616232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.616447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.616479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.616678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.617041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.617084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.617290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.617592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.617624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.617956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.618158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.618191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.618472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.618809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.618841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.619181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.619443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.619475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.619741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.619998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.620030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.620353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.620548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.620579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.620918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.621257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.621269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.621405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.621675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.621708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 
00:30:17.741 [2024-10-07 07:49:21.621935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.622247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.622279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.741 [2024-10-07 07:49:21.622485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.622742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.741 [2024-10-07 07:49:21.622773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.741 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.623085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.623270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.623302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.623482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.623720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.623753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.624045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.624293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.624325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.624587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.624849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.624882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.625180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.626666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.626709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.627070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.627289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.627300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.628186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.628415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.628431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.628660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.628831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.628878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.629133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.629400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.629433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.629633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.629813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.629844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.630164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.630361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.630394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.630656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.631564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.631591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.631903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.634001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.634035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.634353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.634628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.634663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.635743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.636082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.636114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.636291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.637154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.637183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.637444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.637649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.637682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.638692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.639024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.639077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.639432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.639644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.639678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.639877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.640178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.640212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.640499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.640696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.640729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.640920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.641209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.641243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.641514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.641761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.641795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.642774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.643044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.643066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.643266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.644499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.644527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.644771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.644993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.645026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 
00:30:17.742 [2024-10-07 07:49:21.646432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.646704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.646719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.647016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.648734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.742 [2024-10-07 07:49:21.648765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.742 qpair failed and we were unable to recover it. 00:30:17.742 [2024-10-07 07:49:21.649035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.649361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.649396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 00:30:17.743 [2024-10-07 07:49:21.649614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.649919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.649954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 
00:30:17.743 [2024-10-07 07:49:21.650170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.650334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.650345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 00:30:17.743 [2024-10-07 07:49:21.650630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.650864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.650897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 00:30:17.743 [2024-10-07 07:49:21.651096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.651358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.651369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 00:30:17.743 [2024-10-07 07:49:21.651614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.651807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.743 [2024-10-07 07:49:21.651838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:17.743 qpair failed and we were unable to recover it. 
00:30:17.743 [2024-10-07 07:49:21.652096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.652311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.652343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.653326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.653562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.653577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.653768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.653912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.653922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.654072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.654252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.654284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.654510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.654766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.654800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.655049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.655249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.655283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.655465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.655617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.655650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.655846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.656365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.656657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.656877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.657033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.657350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.657698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.657897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.658094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.658271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.658303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.658557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.658876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.658909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.659095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.659345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.659379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.659635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.659804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.659838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.660092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.660267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.660299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.660479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.660731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.660765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.661052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.661208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.661220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.661431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.661643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.661656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.661834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.662036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.662048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.662258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.662450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.743 [2024-10-07 07:49:21.662462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:17.743 qpair failed and we were unable to recover it.
00:30:17.743 [2024-10-07 07:49:21.662663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.662847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.662886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.663119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.664766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.664805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.665127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.665269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.665285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.665458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.665624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.665639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.665804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.666021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.666051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.666260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.666433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.666464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.666723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.667285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.667798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.667949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.668142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.668367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.668400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.668648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.668815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.668847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.669043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.669393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.669426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.669688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.669855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.669886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.670174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.670380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.670397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.670545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.670718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.670749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.671068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.671417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.671786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.671999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.672280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.672443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.672474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.672666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.672903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.672935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.673121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.673290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.673322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.673557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.673798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.673829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.674012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.674234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.674267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.674446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.674613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.674644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.674894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.675158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.675175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.675312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.675461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.675492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.675733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.675974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.676006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.676220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.676456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.676495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.676665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.676823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.744 [2024-10-07 07:49:21.676855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.744 qpair failed and we were unable to recover it.
00:30:17.744 [2024-10-07 07:49:21.677089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.677310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.677326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.677467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.678432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.678466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.678715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.678926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.678957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.679176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.679422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.679453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.679705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.679870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.679901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.680156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.680331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.680363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.680600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.680865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.680896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.681083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.681339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.681369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.681556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.681844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.681883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.682183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.682330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.682347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.682570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.682809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.682842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.683120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.683291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.683322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.683643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.683884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.683914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.684149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.684371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.684386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.684484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.684747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.684763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.684924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.685295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.685701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.685887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:17.745 [2024-10-07 07:49:21.686081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.686375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.745 [2024-10-07 07:49:21.686392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:17.745 qpair failed and we were unable to recover it.
00:30:18.024 [2024-10-07 07:49:21.686700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.686915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.686931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.024 qpair failed and we were unable to recover it.
00:30:18.024 [2024-10-07 07:49:21.687089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.687249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.687265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.024 qpair failed and we were unable to recover it.
00:30:18.024 [2024-10-07 07:49:21.687457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.687651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.687668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.024 qpair failed and we were unable to recover it.
00:30:18.024 [2024-10-07 07:49:21.687882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.024 [2024-10-07 07:49:21.688007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.688022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.688173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.688391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.688407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.688566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.688794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.688810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.689018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.689333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.689366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.689555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.689794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.689824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.690081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.690323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.690353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.690602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.690839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.690874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.691196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.691447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.691480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.691791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.692120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.692152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.692398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.692592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.692622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.692851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.693247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.693636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.693990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.694260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.694558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.694589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.694848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.695083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.695116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.695361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.695632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.695662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.695983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.696268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.696303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.696486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.696798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.696828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.697010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.697145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.697161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.697378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.697616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.697632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.697861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.698180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.698212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.025 [2024-10-07 07:49:21.698412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.698657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.025 [2024-10-07 07:49:21.698687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.025 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.698931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.699377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.699736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.699909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.700226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.700395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.700426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.700671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.700971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.701001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.701196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.701370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.701401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.701701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.701959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.701989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.702166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.702422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.702452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.702700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.702931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.702961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.703202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.703378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.703408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.703642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.703795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.703825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.704020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.704210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.704240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.704430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.704616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.704645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.704885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.705045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.705090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.705276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.705499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.705529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.705856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.706024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.706039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.706284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.706532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.706561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.706888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.707268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.707591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.707811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.707991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.708132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.708149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.708308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.708517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.708532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.708751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.708985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.709015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.026 qpair failed and we were unable to recover it.
00:30:18.026 [2024-10-07 07:49:21.709266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.709451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.026 [2024-10-07 07:49:21.709482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.709714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.709972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.710002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.710215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.710375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.710390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.710604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.710748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.710784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.711053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.711292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.711322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.711590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.711743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.711774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.712027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.712356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.712389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.712564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.712757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.712787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.712964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.713399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.713728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.713907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.714154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.714425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.714456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.714624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.714871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.714911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.715114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.715252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.715267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.715422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.715616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.715632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.715833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.715999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.716015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.716151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.716305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.716320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.716469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.716677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.716708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.716869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.717038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.717087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.717288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.717424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.717463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.717694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.718325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.027 qpair failed and we were unable to recover it.
00:30:18.027 [2024-10-07 07:49:21.718688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.027 [2024-10-07 07:49:21.718971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.028 qpair failed and we were unable to recover it.
00:30:18.028 [2024-10-07 07:49:21.719182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.028 [2024-10-07 07:49:21.719414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.028 [2024-10-07 07:49:21.719443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.028 qpair failed and we were unable to recover it.
00:30:18.028 [2024-10-07 07:49:21.719628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.028 [2024-10-07 07:49:21.719878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.028 [2024-10-07 07:49:21.719908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.028 qpair failed and we were unable to recover it.
00:30:18.028 [2024-10-07 07:49:21.720179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.720489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.720520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.720843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.721073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.721090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.721302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.721548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.721579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.721877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 
00:30:18.028 [2024-10-07 07:49:21.722303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.722791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.722988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.723284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.723430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.723460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.723687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.723921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.723952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 
00:30:18.028 [2024-10-07 07:49:21.724204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.724344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.724360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.724574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.724806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.724836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.725104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.725251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.725267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.725408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.725621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.725636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 
00:30:18.028 [2024-10-07 07:49:21.725841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.726273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.726606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.726947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.727134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.727318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.727348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 
00:30:18.028 [2024-10-07 07:49:21.727575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.727861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.727877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.728026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.728233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.728249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.728524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.728722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.728737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 00:30:18.028 [2024-10-07 07:49:21.729015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.729213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.028 [2024-10-07 07:49:21.729245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.028 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.729417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.729592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.729623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.729853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.730104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.730135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.730380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.730625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.730655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.730919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.731088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.731104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.731327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.731503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.731534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.731765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.732330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.732779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.732976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.733317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.733619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.733650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.733895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.734214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.734246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.734508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.734772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.734802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.735105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.735260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.735291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.735639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.735904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.735935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.736115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.736291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.736321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.736554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.736693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.736709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.736848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.737115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.737146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.737334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.737582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.737612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.737840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.738017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.738048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.738401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.738650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.738681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.738925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.739099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.739132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 
00:30:18.029 [2024-10-07 07:49:21.739301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.739632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.739662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.739955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.740247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.029 [2024-10-07 07:49:21.740291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.029 qpair failed and we were unable to recover it. 00:30:18.029 [2024-10-07 07:49:21.740541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.740801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.740830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.741015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.741357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.741388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 
00:30:18.030 [2024-10-07 07:49:21.741614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.741853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.741884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.742074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.742305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.742336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.742553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.742761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.742791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.742982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.743250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.743292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 
00:30:18.030 [2024-10-07 07:49:21.743581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.743793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.743827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.744014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.744354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.744385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.744675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.744798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.744813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.744976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.745278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.745311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 
00:30:18.030 [2024-10-07 07:49:21.745634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.745949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.745979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.746268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.746478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.746493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.746743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.747043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.747085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.747333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.747671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.747701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 
00:30:18.030 [2024-10-07 07:49:21.747937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.748205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.748237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.748481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.748773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.748814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.749110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.749405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.749436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 00:30:18.030 [2024-10-07 07:49:21.749676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.749821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.030 [2024-10-07 07:49:21.749851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.030 qpair failed and we were unable to recover it. 
00:30:18.030 [2024-10-07 07:49:21.750179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.030 [2024-10-07 07:49:21.750350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.750381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 [2024-10-07 07:49:21.750553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 106226 Killed "${NVMF_APP[@]}" "$@"
00:30:18.031 [2024-10-07 07:49:21.750793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.750827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 [2024-10-07 07:49:21.751013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.751248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.751279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 07:49:21 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:18.031 [2024-10-07 07:49:21.751547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.751834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 07:49:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:18.031 [2024-10-07 07:49:21.751865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 07:49:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:18.031 [2024-10-07 07:49:21.752213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.752378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.752407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 07:49:21 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:18.031 [2024-10-07 07:49:21.752593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 07:49:21 -- common/autotest_common.sh@10 -- # set +x
00:30:18.031 [2024-10-07 07:49:21.752792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.031 [2024-10-07 07:49:21.752825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.031 qpair failed and we were unable to recover it.
00:30:18.031 [2024-10-07 07:49:21.758836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 07:49:21 -- nvmf/common.sh@469 -- # nvmfpid=107152 00:30:18.031 [2024-10-07 07:49:21.759073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 [2024-10-07 07:49:21.759092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.031 qpair failed and we were unable to recover it. 00:30:18.031 07:49:21 -- nvmf/common.sh@470 -- # waitforlisten 107152 00:30:18.031 [2024-10-07 07:49:21.759307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 07:49:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:18.031 [2024-10-07 07:49:21.759505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 [2024-10-07 07:49:21.759521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.031 qpair failed and we were unable to recover it. 00:30:18.031 [2024-10-07 07:49:21.759662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 07:49:21 -- common/autotest_common.sh@819 -- # '[' -z 107152 ']' 00:30:18.031 [2024-10-07 07:49:21.759862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.031 [2024-10-07 07:49:21.759893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.031 qpair failed and we were unable to recover it. 
00:30:18.032 07:49:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.032 [2024-10-07 07:49:21.760141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 07:49:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:18.032 [2024-10-07 07:49:21.760298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.760336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 07:49:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.032 [2024-10-07 07:49:21.760603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 07:49:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:18.032 [2024-10-07 07:49:21.760888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.760919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 07:49:21 -- common/autotest_common.sh@10 -- # set +x 00:30:18.032 [2024-10-07 07:49:21.761187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.761425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.761456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 
00:30:18.032 [2024-10-07 07:49:21.762170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.762385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.762407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.762623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.762814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.762831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.762969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.763166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.763185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.763384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.763598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.763629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 
00:30:18.032 [2024-10-07 07:49:21.763788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.764006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.764038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.764288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.764502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.764517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.764707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.764970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.765001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.765277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.765453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.765469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 
00:30:18.032 [2024-10-07 07:49:21.765752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.765970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.766001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.766178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.766487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.766531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.766672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.766889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.766926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.767247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.767426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.767442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 
00:30:18.032 [2024-10-07 07:49:21.767672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.767924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.767955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.768260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.768497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.768527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.768774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.769297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 
00:30:18.032 [2024-10-07 07:49:21.769655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.032 [2024-10-07 07:49:21.769938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.032 qpair failed and we were unable to recover it. 00:30:18.032 [2024-10-07 07:49:21.770100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.770276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.770307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.770530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.770735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.770766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.770950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.771185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.771217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.771380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.771656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.771691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.772025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.772299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.772330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.772500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.772757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.772788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.772976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.773263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.773295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.773454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.773575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.773590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.773778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.774077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.774110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.774280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.774458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.774488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.774755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.774999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.775029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.775196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.775317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.775360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.775600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.775833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.775864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.776190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.776361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.776393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.776577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.776728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.776758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.776985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.777273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.777305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.777556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.777704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.777734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.777908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.778428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.778793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.778994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.779330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.779538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.779554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.779700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.779906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.779937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 00:30:18.033 [2024-10-07 07:49:21.780258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.780479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.033 [2024-10-07 07:49:21.780509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.033 qpair failed and we were unable to recover it. 
00:30:18.033 [2024-10-07 07:49:21.780686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.780876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.780906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.781093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.781282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.781313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.781456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.781727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.781742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.781983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.782154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.782186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 
00:30:18.034 [2024-10-07 07:49:21.782409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.782541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.782572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.782894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.783069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.783102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.783330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.783642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.783672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.783992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.784236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.784267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 
00:30:18.034 [2024-10-07 07:49:21.784509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.784802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.784832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.785133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.785426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.785457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.785776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.786074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.786106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.786432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.786619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.786648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 
00:30:18.034 [2024-10-07 07:49:21.786791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.787023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.787054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.787302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.787495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.787512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.787706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.787987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.788018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 00:30:18.034 [2024-10-07 07:49:21.788188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.788432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.034 [2024-10-07 07:49:21.788463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.034 qpair failed and we were unable to recover it. 
00:30:18.034 [2024-10-07 07:49:21.788757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.788998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.789029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.789289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.789522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.789553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.789790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.789981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.790012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.790251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.790536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.790565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.790871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.791269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.791799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.791996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.034 qpair failed and we were unable to recover it.
00:30:18.034 [2024-10-07 07:49:21.792238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.034 [2024-10-07 07:49:21.792412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.792451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.792701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.792882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.792912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.793111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.793284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.793316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.793516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.793749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.793780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.794046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.794297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.794328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.794681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.794987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.795017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.795293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.795410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.795440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.795593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.795822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.795852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.796101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.796293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.796323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.796496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.796640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.796671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.796891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.797278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.797644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.797892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.798131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.798367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.798398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.798641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.798903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.798933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.799190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.799381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.799416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.799692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.799813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.799828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.800046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.800279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.800310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.800599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.800854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.800892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.801162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.801324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.801355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.801531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.801695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.801725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.801880] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:30:18.035 [2024-10-07 07:49:21.801944] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:18.035 [2024-10-07 07:49:21.802019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.802145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.802180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.802407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.802626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.035 [2024-10-07 07:49:21.802657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.035 qpair failed and we were unable to recover it.
00:30:18.035 [2024-10-07 07:49:21.802867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.803365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.803698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.803955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.804190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.804408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.804425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.804566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.804723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.804756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.805008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.805239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.805273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.805452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.805674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.805706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.806020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.806161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.806194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.806457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.806749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.806779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.806960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.807180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.807211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.807528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.807698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.807728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.807951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.808434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.808480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.808663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.808812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.808856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.809015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.809265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.809299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.809547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.809755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.809771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.809973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.810122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.810139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.810432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.810627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.810643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.810812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.811015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.811047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.811293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.811551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.036 [2024-10-07 07:49:21.811582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.036 qpair failed and we were unable to recover it.
00:30:18.036 [2024-10-07 07:49:21.811742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.811913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.811943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.812201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.812445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.812484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.812744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.813269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.813742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.813960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.814176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.814327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.814343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.814553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.814742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.814758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.814968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.815091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.815107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.815325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.815573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.815604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.815845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.816100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.816132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.816443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.816659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.816690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.816993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.817140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.817173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.817517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.817686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.817702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.817887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.818173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.818205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.818544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.818734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.818750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.818950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.819157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.819174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.819314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.819523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.819553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.819779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.820074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.820106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.820285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.820437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.820467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.820784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.821100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.821133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.821362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.821664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.821700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.821876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.822081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.037 [2024-10-07 07:49:21.822113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.037 qpair failed and we were unable to recover it.
00:30:18.037 [2024-10-07 07:49:21.822361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.037 [2024-10-07 07:49:21.822579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.037 [2024-10-07 07:49:21.822609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.822866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.823158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.823189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.823410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.823671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.823686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.823874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 
00:30:18.038 [2024-10-07 07:49:21.824232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.824679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.824937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.825104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.825279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.825311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.825540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.825824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.825855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 
00:30:18.038 [2024-10-07 07:49:21.825995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.826234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.826266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.826443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.826665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.826695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.827008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.827239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.827272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.827497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.827653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.827684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 
00:30:18.038 [2024-10-07 07:49:21.827909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.828114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.828146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.828341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.828584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.828602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.828830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.829236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 
00:30:18.038 [2024-10-07 07:49:21.829650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.829963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.830143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.830373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.830405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.830731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.830961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.830992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.831227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.831371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.831402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 
00:30:18.038 [2024-10-07 07:49:21.831684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.831871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.831886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.832085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.832308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.832339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.832505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.038 [2024-10-07 07:49:21.832677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.832709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.038 qpair failed and we were unable to recover it. 00:30:18.038 [2024-10-07 07:49:21.832998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.833239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.038 [2024-10-07 07:49:21.833277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.833525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.833736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.833767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.833931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.834166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.834199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.834318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.834532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.834548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.834830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.835049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.835103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.835262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.835569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.835600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.835785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.836300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.836650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.836784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.837020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.837158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.837174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.837364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.837569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.837587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.837849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.838235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.838530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.838682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.838890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.839168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.839521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.839823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.840043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.840392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.840687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.840840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.841050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 
00:30:18.039 [2024-10-07 07:49:21.841490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.841783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.039 [2024-10-07 07:49:21.841981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.039 qpair failed and we were unable to recover it. 00:30:18.039 [2024-10-07 07:49:21.842109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.842240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.842256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.842458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.842737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.842752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 
00:30:18.040 [2024-10-07 07:49:21.843008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.843195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.843211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.843447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.843668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.843683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.843895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.844421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 
00:30:18.040 [2024-10-07 07:49:21.844753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.844982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.845143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.845298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.845313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.845586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.845722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.845737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.845857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.845989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 
00:30:18.040 [2024-10-07 07:49:21.846235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.846610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.846860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.846998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.847014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.847168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.847306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.847320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 
00:30:18.040 [2024-10-07 07:49:21.847455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.847585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.847600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.847824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.848291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 00:30:18.040 [2024-10-07 07:49:21.848715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.040 [2024-10-07 07:49:21.848872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.040 qpair failed and we were unable to recover it. 
00:30:18.040 [2024-10-07 07:49:21.849033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.040 [2024-10-07 07:49:21.849306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.040 [2024-10-07 07:49:21.849322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.040 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 repeat through 07:49:21.872480; each ends with "qpair failed and we were unable to recover it."]
00:30:18.043 [2024-10-07 07:49:21.872723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.043 [2024-10-07 07:49:21.872962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.043 [2024-10-07 07:49:21.872976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.043 qpair failed and we were unable to recover it.
00:30:18.044 [2024-10-07 07:49:21.877649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 repeat through 07:49:21.882378; each ends with "qpair failed and we were unable to recover it."]
00:30:18.044 [2024-10-07 07:49:21.882575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.882837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.882847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.044 qpair failed and we were unable to recover it. 00:30:18.044 [2024-10-07 07:49:21.882980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.883107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.883119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.044 qpair failed and we were unable to recover it. 00:30:18.044 [2024-10-07 07:49:21.883382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.883574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.044 [2024-10-07 07:49:21.883585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.044 qpair failed and we were unable to recover it. 00:30:18.044 [2024-10-07 07:49:21.883788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 
00:30:18.045 [2024-10-07 07:49:21.884140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.884392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.884656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.884873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.885074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.885086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.885336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.885479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.885491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 
00:30:18.045 [2024-10-07 07:49:21.885604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.886171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.886532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.886770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 00:30:18.045 [2024-10-07 07:49:21.886904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.887031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.045 [2024-10-07 07:49:21.887042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.045 qpair failed and we were unable to recover it. 
00:30:18.045 [2024-10-07 07:49:21.887256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.045 [2024-10-07 07:49:21.887465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.045 [2024-10-07 07:49:21.887485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.045 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" cycle repeats for tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 through 07:49:21.909518 ...]
00:30:18.048 [2024-10-07 07:49:21.909659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.909798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.909816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.909939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.910354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.910755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.910969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 
00:30:18.048 [2024-10-07 07:49:21.911121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.911337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.911353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.911497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.911774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.911790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.911873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.912236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 
00:30:18.048 [2024-10-07 07:49:21.912552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.912757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.912981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.913366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.913727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.913981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 
00:30:18.048 [2024-10-07 07:49:21.914195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.914392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.914409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.914611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.914747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.914763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.914959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.915119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.915137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.048 qpair failed and we were unable to recover it. 00:30:18.048 [2024-10-07 07:49:21.915311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.048 [2024-10-07 07:49:21.915456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.915473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.915583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.915773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.915792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.915899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.916356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.916735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.916974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.917105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.917365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.917382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.917582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.917784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.917801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.917939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.918319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.918773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.918891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.919095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.919453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.919758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.919966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.920159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.920358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.920376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.920524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.920733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.920751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.920874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.921276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.921653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.921931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.922081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.922222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.922239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.922363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.922575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.922591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 00:30:18.049 [2024-10-07 07:49:21.922858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.923048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.923071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.049 qpair failed and we were unable to recover it. 
00:30:18.049 [2024-10-07 07:49:21.923360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.923670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.049 [2024-10-07 07:49:21.923685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.923886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.924380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.924682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.924961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 
00:30:18.050 [2024-10-07 07:49:21.925098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.925300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.925315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.925493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.925628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.925645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.925927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.926241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 
00:30:18.050 [2024-10-07 07:49:21.926596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.926766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.926972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.927318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.927694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.927904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 
00:30:18.050 [2024-10-07 07:49:21.928189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.928378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.928394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.928678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.928935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.928952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.929152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.929275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.929290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.929551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.929756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.929772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 
00:30:18.050 [2024-10-07 07:49:21.929925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.930298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.930743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.930872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.931086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.931228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.931244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 
00:30:18.050 [2024-10-07 07:49:21.931456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.931646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.931661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.931819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.932082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.932099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.050 qpair failed and we were unable to recover it. 00:30:18.050 [2024-10-07 07:49:21.932382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.050 [2024-10-07 07:49:21.932506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.932522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.932785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.932918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.932933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 
00:30:18.051 [2024-10-07 07:49:21.933071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.933200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.933216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.933352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.933540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.933557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.933838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.934269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 
00:30:18.051 [2024-10-07 07:49:21.934566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.934810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.935022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.935225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.935241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.935366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.935558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.935574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.935777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.935985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.936001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 
00:30:18.051 [2024-10-07 07:49:21.936207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.936360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.936376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.936594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.936794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.936809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.936994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.937354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 
00:30:18.051 [2024-10-07 07:49:21.937737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.937940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.938189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.938396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.938413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.938650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.938848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.938864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.939051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.939268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.939284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 
00:30:18.051 [2024-10-07 07:49:21.939474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.939624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.939639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.939942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.940370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.051 qpair failed and we were unable to recover it. 00:30:18.051 [2024-10-07 07:49:21.940813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.051 [2024-10-07 07:49:21.940961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.941103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.941429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.941751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.941917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.942064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.942198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.942214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.942473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.942697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.942713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.942942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.943407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.943791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.943940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.944167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.944442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.944457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.944572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.944825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.944840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.945034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.945168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.945183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.945491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.945633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.945648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.945817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.946331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.946703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.946856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.947051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.947255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.947270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.947576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.947728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.947743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.947981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.948359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.948661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.948893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 
00:30:18.052 [2024-10-07 07:49:21.949029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.949193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.949210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.052 [2024-10-07 07:49:21.949495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.949653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.052 [2024-10-07 07:49:21.949669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.052 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.949864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.949896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:18.053 [2024-10-07 07:49:21.950000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.950007] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.053 [2024-10-07 07:49:21.950016] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.053 [2024-10-07 07:49:21.950016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 [2024-10-07 07:49:21.950023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.053 qpair failed and we were unable to recover it. 
00:30:18.053 [2024-10-07 07:49:21.950242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.950312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:18.053 [2024-10-07 07:49:21.950401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:18.053 [2024-10-07 07:49:21.950511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:18.053 [2024-10-07 07:49:21.950533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.950548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.950512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:18.053 [2024-10-07 07:49:21.950705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.950958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.950974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.951225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.951418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.951433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 
00:30:18.053 [2024-10-07 07:49:21.951635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.951832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.951847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.951986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.952307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.952727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.952988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 
00:30:18.053 [2024-10-07 07:49:21.953209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.953343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.953360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.953576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.953694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.953709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.953895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.954181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.954199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.954409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.954648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.954664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 
00:30:18.053 [2024-10-07 07:49:21.954941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.955412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.955722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.955817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.956023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.956233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.956249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 
00:30:18.053 [2024-10-07 07:49:21.956458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.956664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.956680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.956885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.957089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.957106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.957311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.957505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.957521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.053 qpair failed and we were unable to recover it. 00:30:18.053 [2024-10-07 07:49:21.957667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.053 [2024-10-07 07:49:21.957854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.957870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.054 qpair failed and we were unable to recover it. 
00:30:18.054 [2024-10-07 07:49:21.958077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.958320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.958336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.054 qpair failed and we were unable to recover it. 00:30:18.054 [2024-10-07 07:49:21.958534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.958739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.958755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.054 qpair failed and we were unable to recover it. 00:30:18.054 [2024-10-07 07:49:21.959055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.959264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.959279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.054 qpair failed and we were unable to recover it. 00:30:18.054 [2024-10-07 07:49:21.959493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.959634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.054 [2024-10-07 07:49:21.959650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420 00:30:18.054 qpair failed and we were unable to recover it. 
00:30:18.054 [2024-10-07 07:49:21.959853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.960283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.960699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.960915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.961110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.961334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.961353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.961563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.961696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.961711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.961859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.962098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.962114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.962319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.962552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.962568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.962782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.963184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.963496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.963716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.963941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.964130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.964147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.964287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.964570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.964586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.964787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.964995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.965011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.054 qpair failed and we were unable to recover it.
00:30:18.054 [2024-10-07 07:49:21.965153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.054 [2024-10-07 07:49:21.965357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.965374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.965615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.965748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.965763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.965958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.966242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.966480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.966724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.966937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.967259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.967745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.967965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.968121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.968324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.968341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.968496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.968720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.968737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.968871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.969334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.969714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.969952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.970150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.970364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.970380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.970612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.970826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.970841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.970989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.971344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.971828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.971976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.972234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.972391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.972408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.972620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.972828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.972844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.973114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.973381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.973396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d7ea0 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.973570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.973792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.973807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.055 qpair failed and we were unable to recover it.
00:30:18.055 [2024-10-07 07:49:21.973950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.055 [2024-10-07 07:49:21.974093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.974113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.056 qpair failed and we were unable to recover it.
00:30:18.056 [2024-10-07 07:49:21.974312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.974447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.974462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.056 qpair failed and we were unable to recover it.
00:30:18.056 [2024-10-07 07:49:21.974675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.974805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.974821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.056 qpair failed and we were unable to recover it.
00:30:18.056 [2024-10-07 07:49:21.974996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.975148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.056 [2024-10-07 07:49:21.975164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.056 qpair failed and we were unable to recover it.
00:30:18.056 [2024-10-07 07:49:21.975355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.975574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.975589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.975772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.975872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.975887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.976096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.976295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.976310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.976501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.976634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.976649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.976837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.977187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.977719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.977941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.978214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.978319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.978334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.978541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.978687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.978701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.978843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.979362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.979655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.979882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.980072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.980405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.980730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.980938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.981054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.981436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.339 qpair failed and we were unable to recover it.
00:30:18.339 [2024-10-07 07:49:21.981710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.339 [2024-10-07 07:49:21.981923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.982075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.982275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.982290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.982496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.982800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.982816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.983015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.983272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.983297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.983446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.983635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.983650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.983796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.984198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb8000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.984623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.984772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.985000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.985263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.985669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.985881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.986015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.986341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.986731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.986928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.987118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.987302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.987312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.987494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.987697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.987706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.987880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.988215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.988576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.988840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.989022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.989363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.989760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.989959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.990154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.990348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.990360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.990608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.990802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.990812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.991044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.991309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.991612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.991806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.991918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.992095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.992107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.992361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.992506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.992520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.340 qpair failed and we were unable to recover it.
00:30:18.340 [2024-10-07 07:49:21.992709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.340 [2024-10-07 07:49:21.992905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.341 [2024-10-07 07:49:21.992916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.341 qpair failed and we were unable to recover it.
00:30:18.341 [2024-10-07 07:49:21.993068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.341 [2024-10-07 07:49:21.993210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.341 [2024-10-07 07:49:21.993221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.341 qpair failed and we were unable to recover it.
00:30:18.341 [2024-10-07 07:49:21.993430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.341 [2024-10-07 07:49:21.993676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.341 [2024-10-07 07:49:21.993688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.341 qpair failed and we were unable to recover it.
00:30:18.341 [2024-10-07 07:49:21.993901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.994268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.994641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.994765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.994970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:21.995227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.995566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.995768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.995958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.996353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:21.996744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.996968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.997161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.997292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.997304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.997429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.997606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.997616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.997882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:21.998309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.998591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.998800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.998980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:21.999388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:21.999844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:21.999974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.000164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.000355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.000365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.000546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.000685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.000696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.000852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.001097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.001108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:22.001357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.001527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.001537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.001803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.001992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.002002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.002181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.002377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.002388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.002637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.002840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.002850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 
00:30:18.341 [2024-10-07 07:49:22.002981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.003196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.003206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.341 [2024-10-07 07:49:22.003333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.003539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.341 [2024-10-07 07:49:22.003550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.341 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.003691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.003966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.003977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.004173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.004353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.004363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.004556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.004805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.004816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.004944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.005210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.005223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.005418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.005687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.005697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.005833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.005991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.006000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.006228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.006417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.006426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.006558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.006777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.006787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.006967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.007147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.007157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.007352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.007536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.007545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.007749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.008298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.008730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.008935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.009075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.009285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.009294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.009541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.009731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.009741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.009938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.010047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.010057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.010319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.010517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.010526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.010788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.011282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.011664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.011868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.012061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.012310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.012320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.012460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.012723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.012733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.012872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.012997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.013007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.013254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.013386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.013396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.013605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.013794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.013804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.013933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.014205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.014216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 
00:30:18.342 [2024-10-07 07:49:22.014414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.014679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.014689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.014936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.015064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.015074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.342 qpair failed and we were unable to recover it. 00:30:18.342 [2024-10-07 07:49:22.015270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.015392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.342 [2024-10-07 07:49:22.015403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.015596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.015841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.015851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.015980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.016382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.016724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.016931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.017133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.017323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.017333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.017580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.017702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.017712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.017906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.018228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.018617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.018802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.019003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.019345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.019771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.019960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.020239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.020373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.020384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.020629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.020823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.020833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.021017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.021431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.021717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.021840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.022019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.022399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.022796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.022982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.023176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.023297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.023306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 
00:30:18.343 [2024-10-07 07:49:22.023583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.023709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.023718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.023913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.024224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.024235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.343 qpair failed and we were unable to recover it. 00:30:18.343 [2024-10-07 07:49:22.024504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.024694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.343 [2024-10-07 07:49:22.024704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.024904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.025228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.025701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.025901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.026120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.026256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.026266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.026468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.026710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.026720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.026941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.027463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.027771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.027895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.028020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.028217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.028227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.028427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.028662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.028671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.028804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.029261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.029476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.029799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.029987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.030205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.030400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.030409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.030528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.030704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.030714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.030838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.031307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.031526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.031802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.032066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.032446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.032814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.032932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.033070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.033360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.033728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.033846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 
00:30:18.344 [2024-10-07 07:49:22.034092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.034233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.034242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.034454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.034677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.034687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.034963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.035155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.035165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.344 qpair failed and we were unable to recover it. 00:30:18.344 [2024-10-07 07:49:22.035382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.344 [2024-10-07 07:49:22.035524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.035534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.035665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.035792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.035802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.035992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.036452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.036765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.036880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.037076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.037281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.037290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.037558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.037680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.037690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.037872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.038322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.038780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.038910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.039037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.039293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.039304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.039492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.039734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.039744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.039872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.040214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.040625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.040811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.041064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.041265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.041275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.041487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.041732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.041742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.041949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.042448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.042829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.042963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.043235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.043366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.043376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.043573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.043751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.043760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.043956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.044442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.044781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.044971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.045171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.045362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.045372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.045566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.045756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.045766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.045887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.046072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.046083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 00:30:18.345 [2024-10-07 07:49:22.046199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.046326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.345 [2024-10-07 07:49:22.046335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.345 qpair failed and we were unable to recover it. 
00:30:18.345 [2024-10-07 07:49:22.046517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.345 [2024-10-07 07:49:22.046710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.046720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.046848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.047286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.047645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.047779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.047906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.048189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.048590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.048819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.049028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.049227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.049238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.049436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.049631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.049641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.049827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.050171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.050554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.050765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.050944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.051363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.051671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.051864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.052137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.052349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.052359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.052499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.052742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.052752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.052865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.052995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.053005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.053197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.053393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.053403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.053517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.053624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.053634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.053897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.054214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.054540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.054752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.055029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.055409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.055713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.055843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.056024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.056342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.056710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.346 [2024-10-07 07:49:22.056966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.346 qpair failed and we were unable to recover it.
00:30:18.346 [2024-10-07 07:49:22.057184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.057395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.057405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.057599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.057788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.057798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.057996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.058369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.058642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.058766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.058950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.059307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.059744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.059944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.060129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.060267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.060277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.060499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.060750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.060760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.060892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.061376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.061749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.061897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.062032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.062377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.062770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.062958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.063153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.063367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.063376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.063645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.063823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.063833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.063947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.064276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.064646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.064775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.064902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.065304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.065623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.065761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.065952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.066287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.066626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.066781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.066973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.067098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.067108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.067382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.067565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.347 [2024-10-07 07:49:22.067574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.347 qpair failed and we were unable to recover it.
00:30:18.347 [2024-10-07 07:49:22.067774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.068266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.068556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.068708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.068889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.069279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.069577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.069703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.069911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.070246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.070652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.070805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.070937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.071345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.071689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.071903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.072041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.072230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.072240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.072432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.072653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.348 [2024-10-07 07:49:22.072663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.348 qpair failed and we were unable to recover it.
00:30:18.348 [2024-10-07 07:49:22.072796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.072989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.073000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.073177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.073385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.073395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.073596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.073816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.073826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.073954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 
00:30:18.348 [2024-10-07 07:49:22.074436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.074828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.074946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.075077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.075281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.075291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.075556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.075679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.075689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 
00:30:18.348 [2024-10-07 07:49:22.075883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.076322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.076580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.076836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.348 qpair failed and we were unable to recover it. 00:30:18.348 [2024-10-07 07:49:22.077016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.077140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.348 [2024-10-07 07:49:22.077151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.077334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.077513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.077523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.077666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.077812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.077822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.078025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.078373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.078655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.078860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.078992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.079191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.079506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.079624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.079873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.080262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.080586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.080733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.080924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.081319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.081632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.081754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.081875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.082259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.082644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.082791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.082980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.083239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.083249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.083433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.083725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.083735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.083851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.084210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.084513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.084652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.084779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.085228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.085718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.085966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.086212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.086341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.086351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.086565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.086746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.086756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 00:30:18.349 [2024-10-07 07:49:22.087000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.087124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.087135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.349 qpair failed and we were unable to recover it. 
00:30:18.349 [2024-10-07 07:49:22.087257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.087437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.349 [2024-10-07 07:49:22.087447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.087567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.087759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.087769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.087899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.088285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.088641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.088838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.089035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.089382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.089717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.089917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.090098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.090322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.090332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.090523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.090700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.090709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.090869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.091218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.091550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.091755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.092000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.092241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.092575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.092772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.092984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.093110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.093120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.093386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.093526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.093536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.093734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.094305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.094679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.094903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.095171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.095311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.095320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.095460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.095751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.095760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.095874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.350 [2024-10-07 07:49:22.096293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.096627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.096924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.097114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.097319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.097329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 00:30:18.350 [2024-10-07 07:49:22.097520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.097803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.350 [2024-10-07 07:49:22.097813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.350 qpair failed and we were unable to recover it. 
00:30:18.353 [2024-10-07 07:49:22.130544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.353 [2024-10-07 07:49:22.130728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.353 [2024-10-07 07:49:22.130739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.353 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.130933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.131300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.131699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.131977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.132125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.132382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.132702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.132932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.133145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.133364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.133374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.133576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.133761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.133771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.133958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.134411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.134737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.134959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.135209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.135474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.135484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.135724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.136177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.136505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.136781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.136904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.137103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.137296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.137305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.137516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.137643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.137652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.137845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.138245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.138582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.138863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.139080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.139281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.139291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.139485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.139678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.139688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.139876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.140200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.140647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.140855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.141033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 
00:30:18.354 [2024-10-07 07:49:22.141335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.354 [2024-10-07 07:49:22.141668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.354 [2024-10-07 07:49:22.141924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.354 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.142191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.142392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.142402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.142594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.142737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.142747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.142940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.143345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.143733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.143988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.144215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.144393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.144404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.144586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.144772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.144782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.145027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.145341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.145763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.145893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.146085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.146423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.146683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.146818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.147009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.147414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.147774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.147974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.148251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.148445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.148454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.148650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.148838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.148848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.149033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.149301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.149311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.149470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.149665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.149675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.149855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.150306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.150630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.150894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.151016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.151279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.151289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.151475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.151744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.151754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 00:30:18.355 [2024-10-07 07:49:22.151876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.152052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.355 [2024-10-07 07:49:22.152066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.355 qpair failed and we were unable to recover it. 
00:30:18.355 [2024-10-07 07:49:22.152190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.355 [2024-10-07 07:49:22.152321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.355 [2024-10-07 07:49:22.152331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.355 qpair failed and we were unable to recover it.
[The same three-message sequence — posix_sock_create "connect() failed, errno = 111" (twice), nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously in this log through timestamp 07:49:22.185341.]
00:30:18.359 [2024-10-07 07:49:22.185550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.185728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.185738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.186032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.186232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.186242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.186459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.186648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.186658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.186871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.187208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.187603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.187738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.187941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.188307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.188561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.188835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.189044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.189377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.189761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.189972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.190109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.190420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.190681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.190871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.191037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.191431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.191708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.191979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.192161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.192336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.192346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.192560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.192756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.192765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.192957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.193279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.193695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.193838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.193954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 
00:30:18.359 [2024-10-07 07:49:22.194389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.194646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.194851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.195121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.195241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.359 [2024-10-07 07:49:22.195251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.359 qpair failed and we were unable to recover it. 00:30:18.359 [2024-10-07 07:49:22.195445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.195640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.195650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.195775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.195968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.195977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.196167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.196484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.196817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.196952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.197133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.197262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.197272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.197485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.197615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.197625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.197810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.198098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.198406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.198715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.198912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.199113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.199377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.199387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.199508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.199783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.199793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.200011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.200347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.200672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.200840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.201037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.201387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.201723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.201866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.202007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.202320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.202837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.202973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.203190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.203402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.203412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.203617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.203801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.203811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.203922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.204180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.204584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.204836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.205031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.205243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.205253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 00:30:18.360 [2024-10-07 07:49:22.205373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.205473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.360 [2024-10-07 07:49:22.205483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.360 qpair failed and we were unable to recover it. 
00:30:18.360 [2024-10-07 07:49:22.205698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.361 [2024-10-07 07:49:22.205944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.361 [2024-10-07 07:49:22.205953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.361 qpair failed and we were unable to recover it.
00:30:18.364 [2024-10-07 07:49:22.238734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.238884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.238894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.239081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.239369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.239379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.239646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.239771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.239783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.239973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.240292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.240646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.240779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.241028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.241312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.241759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.241969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.242161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.242354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.242364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.242543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.242801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.242810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.243007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.243253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.243263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.243535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.243670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.243679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.243875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.244211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.244542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.244749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.244878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.245359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.245760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.245890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.246032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.246457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.246845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.246981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.247202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.247459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.247468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 00:30:18.364 [2024-10-07 07:49:22.247586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.247843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.247853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.364 qpair failed and we were unable to recover it. 
00:30:18.364 [2024-10-07 07:49:22.248052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.248250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.364 [2024-10-07 07:49:22.248260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.248450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.248625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.248635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.248762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.248892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.248902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.249079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.249465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.249718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.249990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.250242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.250432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.250441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.250549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.250817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.250827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.251047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.251181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.251191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.251386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.251596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.251606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.251872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.252278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.252568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.252769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.253037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.253301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.253687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.253974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.254169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.254357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.254368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.254514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.254702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.254711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.254927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.255209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.255533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.255789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.255928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.256380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.256707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.256840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.257028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.257134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.257144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.257414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.257599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.257609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.257758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.258139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 
00:30:18.365 [2024-10-07 07:49:22.258470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.365 qpair failed and we were unable to recover it. 00:30:18.365 [2024-10-07 07:49:22.258827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.365 [2024-10-07 07:49:22.258969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.259186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.259466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.259476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.259729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.259857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.259867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 
00:30:18.366 [2024-10-07 07:49:22.260048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.260459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.260799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.260940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.261068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.261213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.261223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 
00:30:18.366 [2024-10-07 07:49:22.261424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.261536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.261546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.261797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.262150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 00:30:18.366 [2024-10-07 07:49:22.262539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.366 [2024-10-07 07:49:22.262768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.366 qpair failed and we were unable to recover it. 
00:30:18.366 [2024-10-07 07:49:22.262995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.263230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.263734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.263946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.264123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.264318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.264328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.264573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.264760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.264770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.264891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.265302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.265566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.265769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.265902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.266308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.266627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.266877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.266995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.267140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.267150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.267331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.267541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.267551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.267796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.268037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.268047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.268158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.268364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.268374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.366 qpair failed and we were unable to recover it.
00:30:18.366 [2024-10-07 07:49:22.268561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.366 [2024-10-07 07:49:22.268686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.268697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.268878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.269257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.269701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.269832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.269992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.270402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.270747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.270875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.271121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.271312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.271322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.271513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.271717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.271727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.271910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.272252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.272568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.272701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.272831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.273245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.273548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.273757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.274021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.274325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.274658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.274800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.275043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.275321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.275331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.275588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.275698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.275707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.275846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.276315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.276582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.276790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.276904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.277152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.277483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.277741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.277937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.278188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.278546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.278678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.367 qpair failed and we were unable to recover it.
00:30:18.367 [2024-10-07 07:49:22.278915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.367 [2024-10-07 07:49:22.279102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.279112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.279296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.279489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.279498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.279744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.279940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.279950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.280081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.280231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.280241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.280436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.280677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.280687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.280815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.281204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.281526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.281788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.281923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.282052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.282402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.282748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.282878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.283139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.283299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.283309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.283438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.283699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.283709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.283856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.284227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.284670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.284796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.284979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.285170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.285180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.285380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.285666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.368 [2024-10-07 07:49:22.285676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.368 qpair failed and we were unable to recover it.
00:30:18.368 [2024-10-07 07:49:22.285852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.286321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.286622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.286849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.287062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.287215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.287225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.287338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.287526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.287536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.287749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.288212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.288551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.288749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.288961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.289152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.652 [2024-10-07 07:49:22.289162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.652 qpair failed and we were unable to recover it.
00:30:18.652 [2024-10-07 07:49:22.289373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.289575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.289585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.289801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.289936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.289946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.290124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.290466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 
00:30:18.652 [2024-10-07 07:49:22.290837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.290971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.291232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.291484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.291494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.291689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.291836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.291846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 00:30:18.652 [2024-10-07 07:49:22.292096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.292350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.292363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.652 qpair failed and we were unable to recover it. 
00:30:18.652 [2024-10-07 07:49:22.292507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.652 [2024-10-07 07:49:22.292703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.292713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.292890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.293273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.293687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.293833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.294027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.294351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.294744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.294947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.295076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.295319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.295329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.295480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.295731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.295741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.295987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.296394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.296639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.296916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.297111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.297252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.297262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.297384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.297565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.297575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.297820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.298231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.298567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.298703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.298910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.299403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.299732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.299928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.300125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.300268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.300277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.300456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.300590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.300600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.300795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.301218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.301744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.301840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.302133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.302423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.302433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.302578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.302704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.302714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.302907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.303089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.303099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 
00:30:18.653 [2024-10-07 07:49:22.303233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.303428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.653 [2024-10-07 07:49:22.303438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.653 qpair failed and we were unable to recover it. 00:30:18.653 [2024-10-07 07:49:22.303639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.303882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.303892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.304178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.304320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.304330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.304473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.304667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.304677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.304804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.305197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.305603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.305702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.305912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.306323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.306697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.306822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.306946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.307157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.307167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.307447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.307645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.307654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.307850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.308169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.308550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.308698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.308933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.309194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.309642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.309780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.309983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.310337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.310694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.310883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.311157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.311344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.311354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.311485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.311609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.311618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.311909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.312323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.312728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.312979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.313105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.313236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.313246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.313509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.313619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.313629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 
00:30:18.654 [2024-10-07 07:49:22.313822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.313999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.314008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.654 [2024-10-07 07:49:22.314189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.314307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.654 [2024-10-07 07:49:22.314317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.654 qpair failed and we were unable to recover it. 00:30:18.655 [2024-10-07 07:49:22.314508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.655 [2024-10-07 07:49:22.314701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.655 [2024-10-07 07:49:22.314711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.655 qpair failed and we were unable to recover it. 00:30:18.655 [2024-10-07 07:49:22.314890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.655 [2024-10-07 07:49:22.315031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.655 [2024-10-07 07:49:22.315041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.655 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.347721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.347861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.347870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.347993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.348439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.348770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.348909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.349041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.349227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.349237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.349433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.349625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.349635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.349887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.350307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.350695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.350896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.351015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.351392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.351739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.351949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.352077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.352375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.352705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.352895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.353075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.353251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.353261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.353458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.353653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.353665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.353841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.354252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.354576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.354709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.354898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.355238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.355651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.355788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 00:30:18.658 [2024-10-07 07:49:22.355912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.356103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.356113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.658 qpair failed and we were unable to recover it. 
00:30:18.658 [2024-10-07 07:49:22.356312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.658 [2024-10-07 07:49:22.356502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.356511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.356689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.356972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.356982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.357190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.357334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.357347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.357555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.357800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.357810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.357954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.358245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.358651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.358929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.359105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.359383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.359844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.359990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.360171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.360282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.360292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.360505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.360646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.360655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.360904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.361208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.361507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.361708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.361895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.362122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.362132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.362445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.362634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.362644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.362888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.363255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.363713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.363901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.364031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.364206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.364216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.364420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.364612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.364622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.364902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.365215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.365679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.365945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.366224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.366363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.366373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.366517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.366782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.366792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.366933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.367202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.367212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 
00:30:18.659 [2024-10-07 07:49:22.367326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.367545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.367555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.659 qpair failed and we were unable to recover it. 00:30:18.659 [2024-10-07 07:49:22.367729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.659 [2024-10-07 07:49:22.367864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.660 [2024-10-07 07:49:22.367874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.660 qpair failed and we were unable to recover it. 00:30:18.660 [2024-10-07 07:49:22.368077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.660 [2024-10-07 07:49:22.368254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.660 [2024-10-07 07:49:22.368264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.660 qpair failed and we were unable to recover it. 00:30:18.660 [2024-10-07 07:49:22.368459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.660 [2024-10-07 07:49:22.368670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.660 [2024-10-07 07:49:22.368680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.660 qpair failed and we were unable to recover it. 
00:30:18.660 [2024-10-07 07:49:22.368858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.660 [2024-10-07 07:49:22.369077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.660 [2024-10-07 07:49:22.369087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.660 qpair failed and we were unable to recover it.
[... the same four-line error group repeats ~87 more times between 07:49:22.369 and 07:49:22.402, identical except for timestamps: connect() failed with errno = 111 (ECONNREFUSED) on every attempt, always for tqpair=0x7fbfb0000b90, addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:30:18.663 [2024-10-07 07:49:22.402362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.402471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.402480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.402629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.402831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.402841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.402928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.403235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 
00:30:18.663 [2024-10-07 07:49:22.403579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.403853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.404033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.404475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.404822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.404966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 
00:30:18.663 [2024-10-07 07:49:22.405169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.405264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.405274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.405524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.405795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.405805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.405995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.406299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 
00:30:18.663 [2024-10-07 07:49:22.406692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.406839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.406980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.407110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.407122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.407300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.407546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.407555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.407825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 
00:30:18.663 [2024-10-07 07:49:22.408297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.408740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.408871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.409010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.409223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.409233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.663 qpair failed and we were unable to recover it. 00:30:18.663 [2024-10-07 07:49:22.409414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.663 [2024-10-07 07:49:22.409550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.409559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.409705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.409814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.409824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.409945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.410227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.410669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.410823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.411007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.411328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.411638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.411801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.411998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.412356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.412684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.412814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.413056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.413246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.413256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.413482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.413729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.413739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.413867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.414258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.414681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.414816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.415012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.415337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.415677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.415867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.415962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.416365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.416707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.416893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.417017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.417221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.417231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.417425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.417608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.417618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.417872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.418240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.418614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.418803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.419070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.419212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.419222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 00:30:18.664 [2024-10-07 07:49:22.419355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.419546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.419555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.664 qpair failed and we were unable to recover it. 
00:30:18.664 [2024-10-07 07:49:22.419757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.664 [2024-10-07 07:49:22.419954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.419964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.420143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.420491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.420832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.420949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.421138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.421381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.421390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.421591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.421769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.421779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.421898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.422179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.422515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.422835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.422982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.423110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.423195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.423205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.423344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.423591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.423601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.423782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.424228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.424545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.424734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.424857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.425218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.425603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.425730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.425846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.426354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.426656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.426799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.426998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.427279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.427733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.427938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.428089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.428444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.428694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.428890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 00:30:18.665 [2024-10-07 07:49:22.429083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.429285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.429294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.665 qpair failed and we were unable to recover it. 
00:30:18.665 [2024-10-07 07:49:22.429482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.429665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.665 [2024-10-07 07:49:22.429675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.429851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.429972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.429982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.430246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.430492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.430501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.430701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.430969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.430979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.431161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.431364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.431374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.431621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.431752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.431762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.431952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.432131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.432141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.432413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.432543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.432553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.432808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.432996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.433006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.433138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.433346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.433356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.433576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.433714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.433724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.433898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.434227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.434587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.434728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.434924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.435185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.435580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.435775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.435913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.436242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.436720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.436910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.437117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.437439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.437851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.437981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.438121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 
00:30:18.666 [2024-10-07 07:49:22.438451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.438764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.438918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.666 [2024-10-07 07:49:22.439164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.439356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.666 [2024-10-07 07:49:22.439366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.666 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.439583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.439694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.439704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.439895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.440297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.440640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.440834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.441012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.441435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.441861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.441995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.442187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.442366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.442375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.442572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.442773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.442783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.442923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.443276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.443678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.443864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.444075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.444198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.444208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.444463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.444651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.444661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.444908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.445318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.445750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.445876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.446064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.446235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.446244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.446433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.446629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.446639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.446833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.447231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.447597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.447745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.447922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.448314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.448771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.448964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 
00:30:18.667 [2024-10-07 07:49:22.449106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.449350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.449360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.449605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.449801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.449810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.450057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.450291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.450301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.667 qpair failed and we were unable to recover it. 00:30:18.667 [2024-10-07 07:49:22.450512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.450653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.667 [2024-10-07 07:49:22.450663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.450795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.450905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.450915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.451071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.451337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.451777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.451921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.452054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.452330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.452637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.452842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.452971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.453102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.453312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.453322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.453516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.453766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.453776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.453915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.454172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.454593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.454753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.454937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.455382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.455703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.455855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.456079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.456277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.456288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.456414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.456628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.456639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.456834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.457187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.457508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.457708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.457832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.458203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.458523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.458664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.458858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.459192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.459595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.459797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 00:30:18.668 [2024-10-07 07:49:22.459930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.460123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.460133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.668 qpair failed and we were unable to recover it. 
00:30:18.668 [2024-10-07 07:49:22.460232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.460405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.668 [2024-10-07 07:49:22.460415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.460656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.460793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.460803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.460924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.461182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.461624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.461774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.461964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.462257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.462548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.462801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.462921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.463366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.463816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.463964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.464161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.464463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.464799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.464995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.465243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.465439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.465451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.465631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.465871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.465881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.466087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.466215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.466225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.466358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.466602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.466612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.466835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.467292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.467613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.467869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.468007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.468200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.468211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.468478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.468675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.468686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.468815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.469295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.469668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.469922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.470105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.470304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.470314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.470432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.470571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.470581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 
00:30:18.669 [2024-10-07 07:49:22.470858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.471074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.471085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.669 qpair failed and we were unable to recover it. 00:30:18.669 [2024-10-07 07:49:22.471273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.471481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.669 [2024-10-07 07:49:22.471490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.471629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.471758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.471768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.471958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.472366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.472620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.472762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.472947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.473257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.473540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.473802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.473917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.474302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.474614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.474732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.474976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.475296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.475631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.475834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.475973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.476333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.476585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.476776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.476934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.477203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.477520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.477655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.477906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.478257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.478599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.478793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 
00:30:18.670 [2024-10-07 07:49:22.478901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.479267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.479598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.479750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.670 qpair failed and we were unable to recover it. 00:30:18.670 [2024-10-07 07:49:22.480013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.670 [2024-10-07 07:49:22.480142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.480153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.480356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.480487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.480497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.480697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.480900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.480910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.481129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.481339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.481350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.481513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.481688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.481698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.481811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.482141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.482605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.482831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.482961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.483286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.483635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.483823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.483958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.484388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.484755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.484963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.485146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.485417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.485427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.485605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.485898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.485908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.486100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.486352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.486778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.486900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.487029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.487299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.487309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.487456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.487664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.487673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.487823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.488141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.488578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.488784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.488911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.489278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.489729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.489880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.490156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.490291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.490301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 00:30:18.671 [2024-10-07 07:49:22.490477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.490600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.490610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.671 qpair failed and we were unable to recover it. 
00:30:18.671 [2024-10-07 07:49:22.490731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.671 [2024-10-07 07:49:22.490941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.490951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.491067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.491454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.491775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.491941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.492028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.492342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.492743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.492959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.493046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.493305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.493655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.493882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.494062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.494324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.494334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.494469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.494716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.494727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.494869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.495207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.495548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.495769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.495986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.496381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.496641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.496775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.496850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.497296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.497611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.497712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.497842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.498236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.498581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.498780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.499112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.499287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.499297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.499546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.499745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.499754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.499868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.500247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 
00:30:18.672 [2024-10-07 07:49:22.500562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.500692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.672 qpair failed and we were unable to recover it. 00:30:18.672 [2024-10-07 07:49:22.500938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.672 [2024-10-07 07:49:22.501117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.501127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 00:30:18.673 [2024-10-07 07:49:22.501259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.501371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.501380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 00:30:18.673 [2024-10-07 07:49:22.501495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.501687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.501697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 
00:30:18.673 [2024-10-07 07:49:22.501905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 00:30:18.673 [2024-10-07 07:49:22.502275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 00:30:18.673 [2024-10-07 07:49:22.502789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.502945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 00:30:18.673 [2024-10-07 07:49:22.503072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.503315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.673 [2024-10-07 07:49:22.503325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.673 qpair failed and we were unable to recover it. 
00:30:18.673 [2024-10-07 07:49:22.503415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.503530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.503540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.503808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.503916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.503926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.504135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.504529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.504796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.504958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.505084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.505458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.505687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.505898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.506147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.506422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.506789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.506938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.507115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.507255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.507265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.507543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.507673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.507683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.507875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.508348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.508621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.508844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.508971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.509288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.509627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.509813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.510015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.510319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.510607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.673 [2024-10-07 07:49:22.510791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.673 qpair failed and we were unable to recover it.
00:30:18.673 [2024-10-07 07:49:22.511037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.511344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.511723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.511983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.512131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.512313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.512323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.512596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.512721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.512731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.512919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.513246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.513573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.513825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.514015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.514357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.514686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.514895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.515099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.515525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.515794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.515961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.516099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.516424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.516818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.516958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.517086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.517332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.517343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.517567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.517745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.517755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.517894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.518296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.518645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.518901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.519092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.519223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.519233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.519358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.519547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.519561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.519842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.520338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.520655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.520808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.521067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.521311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.521321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.674 qpair failed and we were unable to recover it.
00:30:18.674 [2024-10-07 07:49:22.521500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.521682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.674 [2024-10-07 07:49:22.521692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.521818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.522201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.522693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.522823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.522994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.523184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.523194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.523438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.523711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.523725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.523989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.524380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.524754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.524902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.525089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.525199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.525209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.525480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.525729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.525739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.525881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.526266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.526624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.526899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.527126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.527460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.527822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.527971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.528185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.528456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.528467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.528675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.528799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.528809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.528939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.529249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.529608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.675 [2024-10-07 07:49:22.529799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.675 qpair failed and we were unable to recover it.
00:30:18.675 [2024-10-07 07:49:22.529927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.675 qpair failed and we were unable to recover it. 00:30:18.675 [2024-10-07 07:49:22.530329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.675 qpair failed and we were unable to recover it. 00:30:18.675 [2024-10-07 07:49:22.530755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.530919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.675 qpair failed and we were unable to recover it. 00:30:18.675 [2024-10-07 07:49:22.531189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.531408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.531419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.675 qpair failed and we were unable to recover it. 
00:30:18.675 [2024-10-07 07:49:22.531563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.531781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.675 [2024-10-07 07:49:22.531790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.675 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.531870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.532245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.532546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.532646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.532896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.533279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.533691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.533889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.534049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.534462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.534764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.534948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.535143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.535468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.535712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.535967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.536156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.536464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.536808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.536961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.537236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.537574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.537781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.537881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.538056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.538252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.538262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.538558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.538666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.538676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.538924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.539332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.539604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.539794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.540076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.540221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.540231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.540428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.540680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.540690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.540829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.541260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 
00:30:18.676 [2024-10-07 07:49:22.541657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.541861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.676 [2024-10-07 07:49:22.541998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.542138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.676 [2024-10-07 07:49:22.542149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.676 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.542341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.542425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.542435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.542548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.542761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.542770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.542962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.543272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.543585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.543737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.543988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.544237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.544247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.544363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.544624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.544634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.544826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.545161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.545395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.545728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.545915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.546175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.546366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.546376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.546643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.546780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.546790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.546931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.547265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.547715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.547822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.548020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.548472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.548815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.548915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.549012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.549213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.549223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.549442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.549697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.549707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.549965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.550352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.550707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.550837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.551014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.551367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 
00:30:18.677 [2024-10-07 07:49:22.551696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.551956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.552070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.552211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.552221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.677 [2024-10-07 07:49:22.552419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.552553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.677 [2024-10-07 07:49:22.552563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.677 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.552692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.552914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.552924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 
00:30:18.678 [2024-10-07 07:49:22.553120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.553299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.553309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.553496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.553678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.553688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.553801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.554224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 
00:30:18.678 [2024-10-07 07:49:22.554539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.554736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.555021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.555561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 00:30:18.678 [2024-10-07 07:49:22.555884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.678 [2024-10-07 07:49:22.555966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.678 qpair failed and we were unable to recover it. 
00:30:18.678 [2024-10-07 07:49:22.556224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.556348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.556358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.556608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.556741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.556751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.556882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.557218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.557567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.557816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.557994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.558085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.558443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.558831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.558972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.559218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.559409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.559419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.559609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.559745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.559755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.560003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.560292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.560654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.560843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.560927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.561183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.561507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.561784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.562080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.562466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.678 [2024-10-07 07:49:22.562711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.678 [2024-10-07 07:49:22.562898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.678 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.563026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.563269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.563280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.563407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.563525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.563536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.563808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.564193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.564561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.564757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.564887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.565270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.565662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.565796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.565931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.566330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.566784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.566881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.567067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.567241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.567251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.567466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.567665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.567675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.567922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.568260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.568649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.568848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.569028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.569301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.569312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.569437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.569626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.569637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.569831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.570289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.570637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.570853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.570945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.571237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.571248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.571391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.571534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.571543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.571793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.572038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.572048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.572149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.572365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.572376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.679 qpair failed and we were unable to recover it.
00:30:18.679 [2024-10-07 07:49:22.572570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.679 [2024-10-07 07:49:22.572775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.572785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.573063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.573360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.573766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.573980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.574163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.574358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.574369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.574514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.574704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.574714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.574906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.575264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.575741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.575999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.576118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.576301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.576311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.576570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.576774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.576786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.577063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.577238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.577249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.577436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.577716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.577726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.577881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.577994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.578165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.578471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.578728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.578947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.579235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.579432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.579447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.579577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.579845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.579855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.580054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.580405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.580734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.580873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.581143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.581341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.581351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.581486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.581612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.581621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.581835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.582229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.582647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.582803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.582941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.583062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.583075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.583327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.583570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.583581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.680 qpair failed and we were unable to recover it.
00:30:18.680 [2024-10-07 07:49:22.583687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.680 [2024-10-07 07:49:22.583762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.583773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.583971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.584350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.584642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.584790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.584940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.585291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.585686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.585946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.586146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.586473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.586809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.586931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.587072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.587255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.587267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.587455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.587650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.587661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.587906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.588325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.588719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.588911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.589108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.589353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.589688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.589821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.590020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.590412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.590762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.590930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.591160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.591301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.591312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.591471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.591666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.591677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.591859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.591999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.592009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.592151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.592338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.592348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.592591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.592783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.592793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.592919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.593234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.593570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.593807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.681 qpair failed and we were unable to recover it.
00:30:18.681 [2024-10-07 07:49:22.593996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.681 [2024-10-07 07:49:22.594145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.594156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.594280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.594472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.594482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.594618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.594822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.594833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.594948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.595345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.595757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.595986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.596119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.596370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.596381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.596561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.596762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.596773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.596958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.597224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.597566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.597705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.597951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.598139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.682 [2024-10-07 07:49:22.598151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.682 qpair failed and we were unable to recover it.
00:30:18.682 [2024-10-07 07:49:22.598279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.598493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.598504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.598644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.598833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.598844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.599006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.599363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.599611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.599823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.600045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.600294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.600304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.600499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.600725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.600735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.600844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.601313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.601770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.601972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.602160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.602346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.602357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.602565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.602757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.602768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.603017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.603213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.955 [2024-10-07 07:49:22.603224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.955 qpair failed and we were unable to recover it.
00:30:18.955 [2024-10-07 07:49:22.603455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.603702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.603712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.603837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.604236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.604685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.604903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.605039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.605367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.605765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.605897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.606028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.606467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.606760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.606943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.607133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.607263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.607274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.607465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.607556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.607567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.607867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.608100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.608110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.608333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.608587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.608597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.608824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.609236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.609708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.609976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.610170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.610295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.610305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.610526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.610771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.610782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.610927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.611271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.611666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.611805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.612010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.612417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.612717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.612868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.613008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.613215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.613226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.613425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.613542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.613553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.613803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.614053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.614068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.956 [2024-10-07 07:49:22.614248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.614384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.956 [2024-10-07 07:49:22.614395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.956 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.614665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.614841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.614851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.615066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.615406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.615746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.615865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.616112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.616338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.616663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.616882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.617029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.617362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.617688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.617826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.617899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.618227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.618646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.618924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.619124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.619313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.619324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.619451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.619659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.619669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.619872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.620335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.620586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.620842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.621057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.621188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.621199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.621396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.621538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.621548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.621834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.622275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.622571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.622717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.622910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.623265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.623594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.623782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.623960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.624087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.624098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.624316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.624514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.957 [2024-10-07 07:49:22.624525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.957 qpair failed and we were unable to recover it.
00:30:18.957 [2024-10-07 07:49:22.624706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.624816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.624827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.625099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.625298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.625308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.625493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.625614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.625625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.625872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.626275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.626592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.626796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.627042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.627230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.627241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.627423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.627599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.627610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.627865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.628310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.628602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.628754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.628999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.629371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.629761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.629909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.630044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.630224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.630235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.630413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.630619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.630629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.630908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.631212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.631654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.631861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.631992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.632398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.632716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.632950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.633076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.633201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.633211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.633436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.633628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.633638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.633767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.634155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.634552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.634745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.634994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.635301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.635311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.958 qpair failed and we were unable to recover it.
00:30:18.958 [2024-10-07 07:49:22.635507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.958 [2024-10-07 07:49:22.635717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.959 [2024-10-07 07:49:22.635727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.959 qpair failed and we were unable to recover it.
00:30:18.959 [2024-10-07 07:49:22.635822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.959 [2024-10-07 07:49:22.636018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.959 [2024-10-07 07:49:22.636027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.959 qpair failed and we were unable to recover it.
00:30:18.959 [2024-10-07 07:49:22.636225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.959 [2024-10-07 07:49:22.636336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.959 [2024-10-07 07:49:22.636346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.959 qpair failed and we were unable to recover it.
00:30:18.959 [2024-10-07 07:49:22.636480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.636586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.636596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.636809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.636926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.636935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.637132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.637330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.637340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.637539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.637678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.637688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.637886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.638249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.638522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.638803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.638979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.639327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.639772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.639964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.640147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.640381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.640715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.640850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.641041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.641305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.641563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.641841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.641968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.642411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.642791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.642926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.643001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.643175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.643490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.643754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.643959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.644219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.644328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.644338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 
00:30:18.959 [2024-10-07 07:49:22.644543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.644738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.644748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.644874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.645144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.645154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.959 qpair failed and we were unable to recover it. 00:30:18.959 [2024-10-07 07:49:22.645427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.959 [2024-10-07 07:49:22.645545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.645555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.645689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.645932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.645944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.646136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.646381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.646391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.646586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.646786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.646796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.646992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.647461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.647785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.647971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.648168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.648425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.648772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.648989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.649190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.649339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.649348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.649639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.649829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.649839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.650089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.650365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.650758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.650886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.650997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.651267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.651277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.651528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.651743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.651753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.651951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.652292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.652659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.652940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.653052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.653202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.653212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.653424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.653719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.653729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 
00:30:18.960 [2024-10-07 07:49:22.653954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.654090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.654100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.960 [2024-10-07 07:49:22.654348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.654482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.960 [2024-10-07 07:49:22.654491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.960 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.654693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.654893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.654919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.655191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.655332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.655343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 
00:30:18.961 [2024-10-07 07:49:22.655546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.655789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.655800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.655951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.656286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.656615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.656809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 
00:30:18.961 [2024-10-07 07:49:22.657003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.657328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.657677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.657867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.658043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.658239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.658249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 
00:30:18.961 [2024-10-07 07:49:22.658386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.658680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.658689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.658933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.659280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 00:30:18.961 [2024-10-07 07:49:22.659659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.961 [2024-10-07 07:49:22.659851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.961 qpair failed and we were unable to recover it. 
00:30:18.961 [2024-10-07 07:49:22.659976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.660364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.660613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.660770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.660873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 07:49:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:18.961 [2024-10-07 07:49:22.661073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.661085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.661258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 07:49:22 -- common/autotest_common.sh@852 -- # return 0
00:30:18.961 [2024-10-07 07:49:22.661411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.661422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.661614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 07:49:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:18.961 [2024-10-07 07:49:22.661698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.661710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.661822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 07:49:22 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:18.961 [2024-10-07 07:49:22.662052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.662073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 07:49:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.961 [2024-10-07 07:49:22.662253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.662446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.662456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.662607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.662903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.662913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.663023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.663427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.663767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.663968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.664218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.664353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.664365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.961 qpair failed and we were unable to recover it.
00:30:18.961 [2024-10-07 07:49:22.664486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.664686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.961 [2024-10-07 07:49:22.664696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.664902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.665247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.665466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.665722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.665920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.666264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.666670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.666792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.667064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.667437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.667633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.667852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.668054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.668329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.668621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.668747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.668888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.669273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.669631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.669784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.669994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.670361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.670627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.670820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.671019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.671333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.671573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.671727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.671931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.672257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.672613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.672756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.672938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.673229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.673679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.673817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.673948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.674073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.674083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.962 qpair failed and we were unable to recover it.
00:30:18.962 [2024-10-07 07:49:22.674262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.674450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.962 [2024-10-07 07:49:22.674460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.674627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.674756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.674766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.674896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.675346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.675569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.675700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.675833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.676214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.676543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.676678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.676873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.677209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.677543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.677825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.677947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.678202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.678535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.678766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.678904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.679298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.679710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.679856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.680001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.680456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.680851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.680991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.681193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.681388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.681398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.681586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.681705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.681718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.681844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.682209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.682628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.682772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.682883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.683205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.683610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.683820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.683953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.684090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.684102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.963 qpair failed and we were unable to recover it.
00:30:18.963 [2024-10-07 07:49:22.684227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.684349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.963 [2024-10-07 07:49:22.684358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.684469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.684579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.684589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.684719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.684810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.684823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.684939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.685199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.685636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.685841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.685968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.686278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.686533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.686726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.686908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.687322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.687626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.687767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.687961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.688412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.688660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.688869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.688982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.689111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.964 [2024-10-07 07:49:22.689121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.964 qpair failed and we were unable to recover it.
00:30:18.964 [2024-10-07 07:49:22.689248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.689511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.689844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.689974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.690161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 
00:30:18.964 [2024-10-07 07:49:22.690463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.690726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.690907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.691023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.691413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 
00:30:18.964 [2024-10-07 07:49:22.691751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.691891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.964 [2024-10-07 07:49:22.692014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.692106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.964 [2024-10-07 07:49:22.692117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.964 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.692246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.692373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.692382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.692494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.692673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.692684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.692825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.693214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.693461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.693865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.693965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.694215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.694467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.694747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.694887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.694992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.695258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 07:49:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.965 [2024-10-07 07:49:22.695403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.695532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 07:49:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:18.965 [2024-10-07 07:49:22.695642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.695778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.695898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:18.965 [2024-10-07 07:49:22.696080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 07:49:22 -- common/autotest_common.sh@10 -- # set +x 00:30:18.965 [2024-10-07 07:49:22.696293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.696305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.696422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.696622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.696632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.696762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.696872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.696883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.697081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.697469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.697754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.697861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.698048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.698268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.698597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.698730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.698912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.699190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.699504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 
00:30:18.965 [2024-10-07 07:49:22.699836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.699965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.700149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.700331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.700341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.700523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.700641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.965 [2024-10-07 07:49:22.700650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.965 qpair failed and we were unable to recover it. 00:30:18.965 [2024-10-07 07:49:22.700761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.700870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.700881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.701064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.701381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.701775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.701980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.702161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.702443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.702761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.702903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.703032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.703304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.703613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.703815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.703997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.704285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.704603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.704835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.704952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.705198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.705598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.705872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.705997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.706117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.706442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.706762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.706972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.707150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.707464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.707733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.707887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.708021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.708313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.708562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.708824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.708962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.709084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.709268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.709278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 00:30:18.966 [2024-10-07 07:49:22.709405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.709595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.966 [2024-10-07 07:49:22.709606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.966 qpair failed and we were unable to recover it. 
00:30:18.966 [2024-10-07 07:49:22.709785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.710190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.710442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.710712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.710899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 
00:30:18.967 [2024-10-07 07:49:22.711096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.711497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.711752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.711881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.712064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 
00:30:18.967 [2024-10-07 07:49:22.712339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.712585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.712705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.712885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.713067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.713077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.713202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.713401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.713411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 
00:30:18.967 [2024-10-07 07:49:22.713544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.713667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.713677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 Malloc0
00:30:18.967 [2024-10-07 07:49:22.713816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.713939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.713950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.714079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.714194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.714204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.714322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.714431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:18.967 [2024-10-07 07:49:22.714441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.714561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.714759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.714770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 07:49:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[2024-10-07 07:49:22.714886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable
[2024-10-07 07:49:22.715197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 07:49:22 -- common/autotest_common.sh@10 -- # set +x
[2024-10-07 07:49:22.715540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.715884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.715994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.716004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.716120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.716249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.716259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.716442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.716626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.967 [2024-10-07 07:49:22.716637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.967 qpair failed and we were unable to recover it.
00:30:18.967 [2024-10-07 07:49:22.716865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.717207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.717596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.717747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 00:30:18.967 [2024-10-07 07:49:22.717881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.718018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.718028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.967 qpair failed and we were unable to recover it. 
00:30:18.967 [2024-10-07 07:49:22.718149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.967 [2024-10-07 07:49:22.718273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.718283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.718466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.718651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.718661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.718793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.718923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.718933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.719079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.719441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.719689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.719907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.720099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.720232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.720243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.720424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.720537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.720548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.720667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.720917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.720928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.968 qpair failed and we were unable to recover it.
00:30:18.968 [2024-10-07 07:49:22.721106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.721261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.721272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.968 qpair failed and we were unable to recover it.
00:30:18.968 [2024-10-07 07:49:22.721296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:18.968 [2024-10-07 07:49:22.721386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.721583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.721595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.968 qpair failed and we were unable to recover it.
00:30:18.968 [2024-10-07 07:49:22.721801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.722000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.968 [2024-10-07 07:49:22.722012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.968 qpair failed and we were unable to recover it.
00:30:18.968 [2024-10-07 07:49:22.722136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.722326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.722337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.722524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.722625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.722635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.722830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.723154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.723563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.723698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.723823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.724151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.724402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.724656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.724796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.724952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.725249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.725525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.725840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.725979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.726086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.726460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.726789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.726979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 
00:30:18.968 [2024-10-07 07:49:22.727152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.727251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.968 [2024-10-07 07:49:22.727262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.968 qpair failed and we were unable to recover it. 00:30:18.968 [2024-10-07 07:49:22.727445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.727611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.727621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.727760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.727937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.727947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.728069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 
00:30:18.969 [2024-10-07 07:49:22.728411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.728676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.728790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.729086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.729329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 
00:30:18.969 [2024-10-07 07:49:22.729733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.729868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:18.969 [2024-10-07 07:49:22.729991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.730076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.730086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 00:30:18.969 [2024-10-07 07:49:22.730245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 07:49:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.969 [2024-10-07 07:49:22.730362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.969 [2024-10-07 07:49:22.730372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420 00:30:18.969 qpair failed and we were unable to recover it. 
00:30:18.969 [2024-10-07 07:49:22.730498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:18.969 [2024-10-07 07:49:22.730676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.730687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.730818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 07:49:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.969 [2024-10-07 07:49:22.730931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.730941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.731074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.731381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.731625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.731750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.731889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.732196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.732495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.732846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.732941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.733126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.733582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.733822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.733958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfb0000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.734186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.734365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.734386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.969 [2024-10-07 07:49:22.734598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.734722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.969 [2024-10-07 07:49:22.734738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.969 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.734884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.735121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.735138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.735426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.735622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.735638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.735938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.736225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.736541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.736766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.736909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.737294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.737709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.737880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:18.970 [2024-10-07 07:49:22.738033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.738186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.738202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 07:49:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:18.970 [2024-10-07 07:49:22.738347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.738607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.738623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:18.970 [2024-10-07 07:49:22.738826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 07:49:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.970 [2024-10-07 07:49:22.738965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.738981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.739116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.739543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.739827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.739981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.740187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.740332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.740354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.740502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.740693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.740707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.740947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.741240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.741725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.741880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.742139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.742345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.742361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.742578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.742700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.742714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.742862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.743357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.743780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.743938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.744258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.744465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.744481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.744686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.744915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.744930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.745141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.745281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.745297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.970 [2024-10-07 07:49:22.745433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.745600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.970 [2024-10-07 07:49:22.745618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.970 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.745855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:18.971 [2024-10-07 07:49:22.746222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 07:49:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:18.971 [2024-10-07 07:49:22.746523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.746724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:18.971 [2024-10-07 07:49:22.746921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 07:49:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.971 [2024-10-07 07:49:22.747081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.747099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.747291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.747455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.747470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.747679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.747824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.747839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.748037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.748443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.748745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.748878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.749163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.749348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:18.971 [2024-10-07 07:49:22.749363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbfac000b90 with addr=10.0.0.2, port=4420
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.749546] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:18.971 [2024-10-07 07:49:22.751860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.751969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.751995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.752008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.752019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.752047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:18.971 07:49:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:18.971 07:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:18.971 07:49:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.971 [2024-10-07 07:49:22.761749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.761838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.761857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.761866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.761873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.761893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 07:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:18.971 07:49:22 -- host/target_disconnect.sh@58 -- # wait 106466
00:30:18.971 [2024-10-07 07:49:22.771806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.771929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.771945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.771952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.771959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.771976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.781782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.781861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.781885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.781892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.781898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.781915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.791768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.791849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.791864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.791871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.791880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.791896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.801779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.801866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.801881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.801888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.801894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.801910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.811955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.971 [2024-10-07 07:49:22.812026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.971 [2024-10-07 07:49:22.812041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.971 [2024-10-07 07:49:22.812048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.971 [2024-10-07 07:49:22.812053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:18.971 [2024-10-07 07:49:22.812073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.971 qpair failed and we were unable to recover it.
00:30:18.971 [2024-10-07 07:49:22.821840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.971 [2024-10-07 07:49:22.821930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.971 [2024-10-07 07:49:22.821946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.971 [2024-10-07 07:49:22.821953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.971 [2024-10-07 07:49:22.821959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.821978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.831877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.831949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.831963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.831970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.831976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.831991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.841916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.841984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.841999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.842005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.842011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.842027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.851906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.852032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.852047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.852054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.852064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.852080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.861997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.862077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.862092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.862098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.862105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.862120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.872046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.872124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.872142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.872149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.872156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.872171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.881987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.882056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.882073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.882080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.882086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.882102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.892110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.892176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.892191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.892197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.892203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.892219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:18.972 [2024-10-07 07:49:22.902150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.972 [2024-10-07 07:49:22.902221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.972 [2024-10-07 07:49:22.902236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.972 [2024-10-07 07:49:22.902243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.972 [2024-10-07 07:49:22.902249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:18.972 [2024-10-07 07:49:22.902265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.972 qpair failed and we were unable to recover it. 
00:30:19.233 [2024-10-07 07:49:22.912177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.233 [2024-10-07 07:49:22.912248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.233 [2024-10-07 07:49:22.912262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.233 [2024-10-07 07:49:22.912269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.233 [2024-10-07 07:49:22.912275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.233 [2024-10-07 07:49:22.912294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.233 qpair failed and we were unable to recover it. 
00:30:19.233 [2024-10-07 07:49:22.922140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.233 [2024-10-07 07:49:22.922215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.233 [2024-10-07 07:49:22.922237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.233 [2024-10-07 07:49:22.922244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.233 [2024-10-07 07:49:22.922254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.233 [2024-10-07 07:49:22.922270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.233 qpair failed and we were unable to recover it. 
00:30:19.233 [2024-10-07 07:49:22.932208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.233 [2024-10-07 07:49:22.932284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.233 [2024-10-07 07:49:22.932304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.233 [2024-10-07 07:49:22.932311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.233 [2024-10-07 07:49:22.932317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.932333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.942244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.942317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.942331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.942338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.942344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.942359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.952274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.952346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.952360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.952367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.952373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.952389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.962402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.962470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.962489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.962496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.962502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.962518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.972378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.972447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.972461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.972472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.972478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.972494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.982377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.982449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.982471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.982478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.982484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.982498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:22.992426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:22.992523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:22.992539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:22.992546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:22.992552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:22.992568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.002398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.002473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.002520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.002528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.002537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.002562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.012447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.012533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.012548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.012555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.012561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.012577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.022412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.022485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.022500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.022507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.022513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.022528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.032427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.032499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.032514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.032521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.032526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.032542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.042460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.042526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.042541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.042547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.042553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.042568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.052546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.052623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.052638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.052645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.052651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.052666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.062603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.234 [2024-10-07 07:49:23.062670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.234 [2024-10-07 07:49:23.062686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.234 [2024-10-07 07:49:23.062693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.234 [2024-10-07 07:49:23.062699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.234 [2024-10-07 07:49:23.062714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.234 qpair failed and we were unable to recover it. 
00:30:19.234 [2024-10-07 07:49:23.072622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.235 [2024-10-07 07:49:23.072690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.235 [2024-10-07 07:49:23.072705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.235 [2024-10-07 07:49:23.072712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.235 [2024-10-07 07:49:23.072717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.235 [2024-10-07 07:49:23.072732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.235 qpair failed and we were unable to recover it. 
00:30:19.235 [2024-10-07 07:49:23.082699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.235 [2024-10-07 07:49:23.082786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.235 [2024-10-07 07:49:23.082801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.235 [2024-10-07 07:49:23.082808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.235 [2024-10-07 07:49:23.082814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.235 [2024-10-07 07:49:23.082830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.235 qpair failed and we were unable to recover it. 
00:30:19.235 [2024-10-07 07:49:23.092648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.235 [2024-10-07 07:49:23.092721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.235 [2024-10-07 07:49:23.092736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.235 [2024-10-07 07:49:23.092743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.235 [2024-10-07 07:49:23.092752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.235 [2024-10-07 07:49:23.092768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.235 qpair failed and we were unable to recover it. 
00:30:19.235 [2024-10-07 07:49:23.102625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.235 [2024-10-07 07:49:23.102694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.235 [2024-10-07 07:49:23.102709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.235 [2024-10-07 07:49:23.102715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.235 [2024-10-07 07:49:23.102722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.235 [2024-10-07 07:49:23.102738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.235 qpair failed and we were unable to recover it. 
00:30:19.235 [2024-10-07 07:49:23.112731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.235 [2024-10-07 07:49:23.112802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.235 [2024-10-07 07:49:23.112816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.235 [2024-10-07 07:49:23.112823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.235 [2024-10-07 07:49:23.112829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.235 [2024-10-07 07:49:23.112844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.235 qpair failed and we were unable to recover it. 
00:30:19.235 [2024-10-07 07:49:23.122705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.122777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.122792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.122799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.122805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.122823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.132784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.132852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.132865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.132872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.132878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.132893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.142802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.142869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.142884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.142891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.142896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.142911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.152878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.152984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.152999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.153006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.153012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.153027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.162872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.162949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.162964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.162971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.162977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.162993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.172893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.172971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.172986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.172992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.172999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.173014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.182926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.183004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.183017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.183027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.183033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.183049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.235 [2024-10-07 07:49:23.192913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.235 [2024-10-07 07:49:23.192988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.235 [2024-10-07 07:49:23.193001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.235 [2024-10-07 07:49:23.193008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.235 [2024-10-07 07:49:23.193014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.235 [2024-10-07 07:49:23.193029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.235 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.202987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.203064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.203078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.203085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.203091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.203107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.213004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.213078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.213092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.213099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.213105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.213120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.222994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.223066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.223079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.223086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.223092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.223107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.233053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.233145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.233161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.233167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.233173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.233189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.243165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.243240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.243254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.243261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.243267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.243283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.253143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.253214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.253228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.253235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.253241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.253256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.263166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.263233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.263248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.263254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.263261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.263276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.273209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.273276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.273294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.273301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.273307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.273322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.283215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.283286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.283300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.283307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.283313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.497 [2024-10-07 07:49:23.283328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.497 qpair failed and we were unable to recover it.
00:30:19.497 [2024-10-07 07:49:23.293255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.497 [2024-10-07 07:49:23.293326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.497 [2024-10-07 07:49:23.293340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.497 [2024-10-07 07:49:23.293346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.497 [2024-10-07 07:49:23.293352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.293368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.303308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.303411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.303426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.303432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.303438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.303455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.313319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.313385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.313399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.313406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.313412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.313427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.323353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.323417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.323431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.323438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.323443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.323458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.333377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.333445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.333459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.333466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.333472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.333486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.343397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.343472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.343486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.343493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.343499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.343514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.353474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.353549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.353563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.353570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.353576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.353592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.363445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.363517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.363534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.363542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.363551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.363566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.373495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.373565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.373579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.373586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.373592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.373607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.383551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.383624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.383638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.383645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.383651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.383666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.393539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.393611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.393625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.498 [2024-10-07 07:49:23.393631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.498 [2024-10-07 07:49:23.393637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.498 [2024-10-07 07:49:23.393652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.498 qpair failed and we were unable to recover it.
00:30:19.498 [2024-10-07 07:49:23.403576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.498 [2024-10-07 07:49:23.403642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.498 [2024-10-07 07:49:23.403656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.403663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.403668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.403687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.413642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.413752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.413767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.413774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.413779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.413795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.423629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.423699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.423712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.423719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.423725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.423739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.433658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.433728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.433742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.433748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.433754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.433768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.443758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.443828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.443842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.443848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.443854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.443869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.453728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.453791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.453808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.453815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.453821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.453836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.499 [2024-10-07 07:49:23.463773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.499 [2024-10-07 07:49:23.463848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.499 [2024-10-07 07:49:23.463862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.499 [2024-10-07 07:49:23.463870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.499 [2024-10-07 07:49:23.463876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.499 [2024-10-07 07:49:23.463891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.499 qpair failed and we were unable to recover it.
00:30:19.761 [2024-10-07 07:49:23.473803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:19.761 [2024-10-07 07:49:23.473875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:19.761 [2024-10-07 07:49:23.473889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:19.761 [2024-10-07 07:49:23.473897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:19.761 [2024-10-07 07:49:23.473903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:19.761 [2024-10-07 07:49:23.473919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:19.761 qpair failed and we were unable to recover it.
00:30:19.761 [2024-10-07 07:49:23.483817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.483921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.483937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.483944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.483950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.483965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.493850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.493964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.493979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.493986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.493996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.494012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.503844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.503915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.503931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.503938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.503944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.503959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.513867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.513934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.513948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.513954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.513960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.513975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.523953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.524029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.524044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.524051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.524063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.524079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.533959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.534025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.534039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.534046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.534052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.534071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.544000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.544075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.544089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.544096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-10-07 07:49:23.544101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.761 [2024-10-07 07:49:23.544117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-10-07 07:49:23.554008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-10-07 07:49:23.554079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-10-07 07:49:23.554093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-10-07 07:49:23.554099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.554105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.554120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.564049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.564122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.564136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.564143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.564149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.564168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.574070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.574147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.574161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.574168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.574175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.574190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.584115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.584188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.584202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.584208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.584217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.584233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.594128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.594192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.594207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.594214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.594220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.594235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.604223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.604317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.604332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.604339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.604345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.604361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.614191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.614260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.614274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.614280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.614286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.614302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.624270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.624380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.624395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.624402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.624408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.624424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.634224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.634297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.634311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.634317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.634323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.634338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.644284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.644361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.644378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.644386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.644391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.644407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.654301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.654420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.762 [2024-10-07 07:49:23.654435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.762 [2024-10-07 07:49:23.654442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.762 [2024-10-07 07:49:23.654448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.762 [2024-10-07 07:49:23.654464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.762 qpair failed and we were unable to recover it. 
00:30:19.762 [2024-10-07 07:49:23.664337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.762 [2024-10-07 07:49:23.664411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.664426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.664433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.664439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.664454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.674403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.674512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.674527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.674537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.674543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.674558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.684381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.684455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.684469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.684476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.684482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.684499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.694421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.694497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.694512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.694522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.694528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.694543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.704450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.704518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.704535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.704541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.704547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.704562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.714479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.714554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.714568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.714575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.714581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.714599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:19.763 [2024-10-07 07:49:23.724534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.763 [2024-10-07 07:49:23.724603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.763 [2024-10-07 07:49:23.724617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.763 [2024-10-07 07:49:23.724623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.763 [2024-10-07 07:49:23.724629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:19.763 [2024-10-07 07:49:23.724645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.763 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.734557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.734631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.734645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.734651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.734657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.734675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.744502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.744572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.744586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.744592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.744598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.744613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.754610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.754681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.754695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.754701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.754707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.754722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.764643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.764712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.764726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.764736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.764742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.764757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.774659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.774783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.774798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.774805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.774811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.774828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.025 [2024-10-07 07:49:23.784787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.025 [2024-10-07 07:49:23.784905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.025 [2024-10-07 07:49:23.784920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.025 [2024-10-07 07:49:23.784927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.025 [2024-10-07 07:49:23.784933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.025 [2024-10-07 07:49:23.784948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.025 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.794783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.794856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.794870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.794877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.794884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.794899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.804696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.804763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.804778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.804785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.804792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.804807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.814816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.814885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.814899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.814906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.814912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.814927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.824825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.824891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.824905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.824912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.824918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.824933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.834878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.834988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.835004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.835011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.835017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.835032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.844886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.844959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.844973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.844980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.844986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.845001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.854910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.855022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.855040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.855047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.855053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.855073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.864933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.865001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.865015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.865021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.865027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.865042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.874974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.875047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.875065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.875072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.875078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.875093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.885014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.885085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.885099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.026 [2024-10-07 07:49:23.885106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.026 [2024-10-07 07:49:23.885112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.026 [2024-10-07 07:49:23.885127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.026 qpair failed and we were unable to recover it. 
00:30:20.026 [2024-10-07 07:49:23.895079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.026 [2024-10-07 07:49:23.895182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.026 [2024-10-07 07:49:23.895197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.895203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.895209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.895228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.905069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.905137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.905151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.905158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.905163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.905178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.915095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.915167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.915182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.915188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.915194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.915210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.925151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.925266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.925281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.925287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.925293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.925309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.935152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.935224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.935238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.935245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.935251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.935267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.945112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.945185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.945205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.945212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.945218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.945234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.955188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.955260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.955275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.955281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.955288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.955303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.965244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.965363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.965379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.965385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.965391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.965408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.975253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.975334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.975349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.975356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.975362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.975378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.027 [2024-10-07 07:49:23.985286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.027 [2024-10-07 07:49:23.985408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.027 [2024-10-07 07:49:23.985423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.027 [2024-10-07 07:49:23.985430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.027 [2024-10-07 07:49:23.985435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.027 [2024-10-07 07:49:23.985454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.027 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-10-07 07:49:23.995313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-10-07 07:49:23.995380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-10-07 07:49:23.995394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-10-07 07:49:23.995400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-10-07 07:49:23.995406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.289 [2024-10-07 07:49:23.995421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.289 qpair failed and we were unable to recover it. 
00:30:20.289 [2024-10-07 07:49:24.005331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.289 [2024-10-07 07:49:24.005433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.289 [2024-10-07 07:49:24.005448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.289 [2024-10-07 07:49:24.005455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.289 [2024-10-07 07:49:24.005461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.005476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.015335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.015402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.015416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.015423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.015429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.015444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.025395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.025462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.025476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.025482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.025488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.025503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.035436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.035512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.035526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.035532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.035538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.035553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.045486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.045558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.045572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.045578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.045584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.045599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.055535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.055829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.055845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.055851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.055857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.055874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.065574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.065641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.065656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.065662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.065669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.065684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.075592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.075691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.075706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.075712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.075722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.075737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.085626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.085740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.085756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.085762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.085769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.085785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.095618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.095683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.095698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.095704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.095710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.095725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.105635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.105704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.290 [2024-10-07 07:49:24.105718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.290 [2024-10-07 07:49:24.105725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.290 [2024-10-07 07:49:24.105731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.290 [2024-10-07 07:49:24.105746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.290 qpair failed and we were unable to recover it. 
00:30:20.290 [2024-10-07 07:49:24.115667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.290 [2024-10-07 07:49:24.115783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.115798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.115805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.115811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.115827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.125678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.125755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.125769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.125776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.125782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.125797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.135717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.135782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.135797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.135804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.135809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.135825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.145761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.145832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.145847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.145854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.145860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.145875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.155758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.155839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.155853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.155860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.155866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.155881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.165764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.165832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.165847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.165857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.165863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.165878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.175778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.175849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.175863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.175869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.175875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.175890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.185876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.185945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.185960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.185967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.185973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.185988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.195855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.195930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.195944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.195950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.195956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.195971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.205869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.205939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.205953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.205960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.205966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.205982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.215969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.291 [2024-10-07 07:49:24.216044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.291 [2024-10-07 07:49:24.216061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.291 [2024-10-07 07:49:24.216069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.291 [2024-10-07 07:49:24.216075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.291 [2024-10-07 07:49:24.216090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.291 qpair failed and we were unable to recover it. 
00:30:20.291 [2024-10-07 07:49:24.225934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.292 [2024-10-07 07:49:24.226004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.292 [2024-10-07 07:49:24.226018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.292 [2024-10-07 07:49:24.226024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.292 [2024-10-07 07:49:24.226030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.292 [2024-10-07 07:49:24.226045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.292 qpair failed and we were unable to recover it. 
00:30:20.292 [2024-10-07 07:49:24.235967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.292 [2024-10-07 07:49:24.236032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.292 [2024-10-07 07:49:24.236052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.292 [2024-10-07 07:49:24.236063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.292 [2024-10-07 07:49:24.236069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.292 [2024-10-07 07:49:24.236084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.292 qpair failed and we were unable to recover it. 
00:30:20.292 [2024-10-07 07:49:24.246071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.292 [2024-10-07 07:49:24.246141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.292 [2024-10-07 07:49:24.246156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.292 [2024-10-07 07:49:24.246163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.292 [2024-10-07 07:49:24.246168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.292 [2024-10-07 07:49:24.246184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.292 qpair failed and we were unable to recover it. 
00:30:20.292 [2024-10-07 07:49:24.256110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.292 [2024-10-07 07:49:24.256171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.292 [2024-10-07 07:49:24.256193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.292 [2024-10-07 07:49:24.256203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.292 [2024-10-07 07:49:24.256209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.292 [2024-10-07 07:49:24.256224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.292 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.266053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.266126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.266144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.266150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.266156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.266171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.276127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.276198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.276212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.276219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.276225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.276240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.286361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.286425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.286439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.286446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.286452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.286467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.296133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.296206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.296220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.296226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.296232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.296248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.306283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.306365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.306379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.306386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.306394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.306409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.316288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.316371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.316386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.316393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.316399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.316414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.553 qpair failed and we were unable to recover it. 
00:30:20.553 [2024-10-07 07:49:24.326221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.553 [2024-10-07 07:49:24.326294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.553 [2024-10-07 07:49:24.326308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.553 [2024-10-07 07:49:24.326315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.553 [2024-10-07 07:49:24.326321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.553 [2024-10-07 07:49:24.326336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.336257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.336325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.336339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.336346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.336352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.336367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.346281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.346351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.346367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.346374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.346380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.346395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.356377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.356459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.356473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.356480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.356487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.356502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.366341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.366425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.366441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.366447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.366454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.366470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.376442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.376549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.376564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.376571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.376577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.376593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.386400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.386470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.386484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.386491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.386497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.386515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.396417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.396490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.396504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.396511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.396517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.396531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.406584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.406647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.406666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.406673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.406679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.406695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.416480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.416590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.416605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.416612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.416618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.416635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.426578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.426648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.554 [2024-10-07 07:49:24.426662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.554 [2024-10-07 07:49:24.426669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.554 [2024-10-07 07:49:24.426674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.554 [2024-10-07 07:49:24.426690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.554 qpair failed and we were unable to recover it. 
00:30:20.554 [2024-10-07 07:49:24.436674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.554 [2024-10-07 07:49:24.436749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.436766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.436773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.436780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.436795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.446640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.446709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.446723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.446730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.446736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.446751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.456661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.456725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.456744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.456750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.456756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.456771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.466736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.466815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.466830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.466836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.466845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.466860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.476739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.476812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.476826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.476833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.476839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.476857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.486831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.486921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.486936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.486943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.486948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.486964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.496813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.496878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.496895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.496902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.496907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.496922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.506762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.506845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.506862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.506869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.506875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.506891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.555 [2024-10-07 07:49:24.516778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.555 [2024-10-07 07:49:24.516845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.555 [2024-10-07 07:49:24.516859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.555 [2024-10-07 07:49:24.516866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.555 [2024-10-07 07:49:24.516871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.555 [2024-10-07 07:49:24.516886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.555 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.526902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.817 [2024-10-07 07:49:24.526971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.817 [2024-10-07 07:49:24.526990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.817 [2024-10-07 07:49:24.526997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.817 [2024-10-07 07:49:24.527003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.817 [2024-10-07 07:49:24.527018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.817 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.536866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.817 [2024-10-07 07:49:24.536937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.817 [2024-10-07 07:49:24.536950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.817 [2024-10-07 07:49:24.536957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.817 [2024-10-07 07:49:24.536963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.817 [2024-10-07 07:49:24.536978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.817 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.546940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.817 [2024-10-07 07:49:24.547012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.817 [2024-10-07 07:49:24.547026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.817 [2024-10-07 07:49:24.547032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.817 [2024-10-07 07:49:24.547038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.817 [2024-10-07 07:49:24.547053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.817 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.556981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.817 [2024-10-07 07:49:24.557094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.817 [2024-10-07 07:49:24.557111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.817 [2024-10-07 07:49:24.557118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.817 [2024-10-07 07:49:24.557123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.817 [2024-10-07 07:49:24.557138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.817 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.566930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.817 [2024-10-07 07:49:24.567006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.817 [2024-10-07 07:49:24.567021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.817 [2024-10-07 07:49:24.567027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.817 [2024-10-07 07:49:24.567037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.817 [2024-10-07 07:49:24.567052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.817 qpair failed and we were unable to recover it. 
00:30:20.817 [2024-10-07 07:49:24.577044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.577114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.577128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.577135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.577140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.577155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.587052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.587124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.587138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.587144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.587150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.587165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.597061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.597129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.597143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.597149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.597155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.597170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.607126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.607225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.607238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.607244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.607250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.607265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.617128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.617243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.617259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.617265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.617271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.617285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.627172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.627247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.627262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.627268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.627274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.627289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.637185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.637256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.637270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.637277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.637283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.637298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.647177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.647247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.647261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.647267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.647273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.647287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.657307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.657404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.657418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.657425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.657434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.657449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.667271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.667375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.667389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.667396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.667402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.667417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.818 [2024-10-07 07:49:24.677327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.818 [2024-10-07 07:49:24.677399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.818 [2024-10-07 07:49:24.677413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.818 [2024-10-07 07:49:24.677420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.818 [2024-10-07 07:49:24.677427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.818 [2024-10-07 07:49:24.677442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.818 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.687330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.687392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.687407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.687413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.687419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.687434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.697280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.697348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.697362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.697369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.697374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.697389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.707438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.707547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.707561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.707568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.707574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.707588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.717376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.717444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.717457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.717464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.717470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.717484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.727419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.727513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.727527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.727533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.727539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.727554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.737495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.737612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.737627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.737633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.737639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.737654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.747535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.747646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.747667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.747678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.747685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.747699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.757557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.757667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.757680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.757687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.757693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.757708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.767572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.767648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.767662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.767669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.767674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.767689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:20.819 [2024-10-07 07:49:24.777584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.819 [2024-10-07 07:49:24.777654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.819 [2024-10-07 07:49:24.777669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.819 [2024-10-07 07:49:24.777676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.819 [2024-10-07 07:49:24.777682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:20.819 [2024-10-07 07:49:24.777697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.819 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.787623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.787712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.787726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.787732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.787738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.787752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.797661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.797727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.797741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.797748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.797754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.797768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.807676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.807747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.807761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.807768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.807773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.807788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.817746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.817845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.817859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.817865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.817871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.817885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.827734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.827805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.827820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.827827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.827833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.827848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.837771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.837841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.837859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.837865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.837871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.837886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.847711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.847771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.847785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.081 [2024-10-07 07:49:24.847792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.081 [2024-10-07 07:49:24.847798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.081 [2024-10-07 07:49:24.847812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.081 qpair failed and we were unable to recover it. 
00:30:21.081 [2024-10-07 07:49:24.857824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.081 [2024-10-07 07:49:24.857947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.081 [2024-10-07 07:49:24.857963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.857970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.857976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.857991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.867769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.867833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.867848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.867855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.867861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.867875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.877888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.877993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.878007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.878014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.878021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.878037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.887936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.888004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.888018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.888025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.888031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.888046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.897971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.898035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.898049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.898056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.898065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.898080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.907985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.908061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.908075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.908082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.908088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.908102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.918006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.918078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.918092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.918099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.918105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.918119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.928069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.928176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.928198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.928204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.928211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.928226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.938052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.938125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.938139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.938145] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.938151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.938166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.948086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.948163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.948177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.948184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.948190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.948205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.082 [2024-10-07 07:49:24.958178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.082 [2024-10-07 07:49:24.958259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.082 [2024-10-07 07:49:24.958274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.082 [2024-10-07 07:49:24.958281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.082 [2024-10-07 07:49:24.958286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.082 [2024-10-07 07:49:24.958302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.082 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:24.968250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:24.968336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:24.968351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:24.968358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:24.968363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:24.968382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:24.978203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:24.978268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:24.978282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:24.978289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:24.978295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:24.978310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:24.988179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:24.988248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:24.988263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:24.988270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:24.988276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:24.988290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:24.998269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:24.998345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:24.998360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:24.998367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:24.998372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:24.998387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:25.008266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:25.008335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:25.008349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:25.008356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:25.008362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:25.008377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:25.018347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:25.018453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:25.018477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:25.018484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:25.018490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:25.018505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:25.028362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:25.028430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:25.028445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:25.028451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:25.028457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:25.028471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:25.038302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:25.038380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:25.038394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:25.038400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:25.038406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:25.038420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.083 [2024-10-07 07:49:25.048381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.083 [2024-10-07 07:49:25.048455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.083 [2024-10-07 07:49:25.048477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.083 [2024-10-07 07:49:25.048484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.083 [2024-10-07 07:49:25.048489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.083 [2024-10-07 07:49:25.048508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.083 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.058422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.058491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.058506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.058512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.058521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.058536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.068497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.068568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.068583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.068590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.068596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.068611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.078411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.078477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.078491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.078498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.078504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.078519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.088501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.088593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.088607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.088613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.088619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.088634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.098576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.098658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.098673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.098679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.098685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.098699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.108569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.108641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.108655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.108662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.108668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.108683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.345 [2024-10-07 07:49:25.118595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.345 [2024-10-07 07:49:25.118666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.345 [2024-10-07 07:49:25.118680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.345 [2024-10-07 07:49:25.118687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.345 [2024-10-07 07:49:25.118693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.345 [2024-10-07 07:49:25.118708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.345 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.128615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.128705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.128720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.128726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.128732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.128746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.138668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.138737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.138751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.138757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.138763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.138778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.148680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.148754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.148768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.148774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.148783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.148798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.158701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.158782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.158796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.158803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.158809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.158824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.168735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.168801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.168816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.168823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.168828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.168843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.178779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.178887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.178902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.178909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.178914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.178929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.188791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.188862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.188877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.188883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.188889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.188904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.198815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.198883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.198897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.198904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.198910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.198925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.208843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.208920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.208934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.208940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.208946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.208961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.218880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.218951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.218965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.218972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.218978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.218993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.228940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.229044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.229062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.229069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.229075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.229090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.238928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.238998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.239012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.239022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.239028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.239043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.248971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.249074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.249088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.346 [2024-10-07 07:49:25.249095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.346 [2024-10-07 07:49:25.249101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.346 [2024-10-07 07:49:25.249116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.346 qpair failed and we were unable to recover it. 
00:30:21.346 [2024-10-07 07:49:25.258986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.346 [2024-10-07 07:49:25.259056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.346 [2024-10-07 07:49:25.259073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.259080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.259086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.259101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.347 [2024-10-07 07:49:25.269012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.347 [2024-10-07 07:49:25.269101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.347 [2024-10-07 07:49:25.269115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.269122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.269127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.269142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.347 [2024-10-07 07:49:25.279043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.347 [2024-10-07 07:49:25.279121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.347 [2024-10-07 07:49:25.279136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.279143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.279149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.279164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.347 [2024-10-07 07:49:25.289068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.347 [2024-10-07 07:49:25.289139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.347 [2024-10-07 07:49:25.289153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.289161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.289166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.289181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.347 [2024-10-07 07:49:25.299085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.347 [2024-10-07 07:49:25.299198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.347 [2024-10-07 07:49:25.299213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.299220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.299226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.299242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.347 [2024-10-07 07:49:25.309159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.347 [2024-10-07 07:49:25.309229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.347 [2024-10-07 07:49:25.309243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.347 [2024-10-07 07:49:25.309249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.347 [2024-10-07 07:49:25.309255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.347 [2024-10-07 07:49:25.309270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.347 qpair failed and we were unable to recover it. 
00:30:21.607 [2024-10-07 07:49:25.319133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.607 [2024-10-07 07:49:25.319204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.607 [2024-10-07 07:49:25.319219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.607 [2024-10-07 07:49:25.319226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.607 [2024-10-07 07:49:25.319232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.607 [2024-10-07 07:49:25.319246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.607 qpair failed and we were unable to recover it. 
00:30:21.607 [2024-10-07 07:49:25.329173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.607 [2024-10-07 07:49:25.329236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.607 [2024-10-07 07:49:25.329250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.607 [2024-10-07 07:49:25.329260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.607 [2024-10-07 07:49:25.329265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.607 [2024-10-07 07:49:25.329281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.607 qpair failed and we were unable to recover it. 
00:30:21.607 [2024-10-07 07:49:25.339242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.607 [2024-10-07 07:49:25.339355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.607 [2024-10-07 07:49:25.339375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.607 [2024-10-07 07:49:25.339381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.607 [2024-10-07 07:49:25.339387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.607 [2024-10-07 07:49:25.339402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.607 qpair failed and we were unable to recover it. 
00:30:21.607 [2024-10-07 07:49:25.349215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.607 [2024-10-07 07:49:25.349289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.607 [2024-10-07 07:49:25.349304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.607 [2024-10-07 07:49:25.349311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.349317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.349332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.359258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.359329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.359343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.359350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.359356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.359370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.369341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.369442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.369456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.369464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.369470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.369485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.379342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.379412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.379426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.379433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.379438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.379454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.389358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.389445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.389460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.389466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.389472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.389487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.399375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.399454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.399468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.399475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.399481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.399495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.409414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.409495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.409508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.409515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.409521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.409536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.419365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.419434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.419452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.419459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.419464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.419479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.429499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.429606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.429620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.429627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.429633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.429648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.439519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.439588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.439602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.439610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.439616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.439630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.449554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.449652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.449667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.449673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.449679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.449694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.459555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.459629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.608 [2024-10-07 07:49:25.459643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.608 [2024-10-07 07:49:25.459650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.608 [2024-10-07 07:49:25.459655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.608 [2024-10-07 07:49:25.459674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.608 qpair failed and we were unable to recover it. 
00:30:21.608 [2024-10-07 07:49:25.469596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.608 [2024-10-07 07:49:25.469667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.469681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.469687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.469693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.469707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.479591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.479664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.479678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.479685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.479691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.479705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.489634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.489733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.489747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.489753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.489759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.489774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.499671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.499744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.499759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.499765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.499771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.499785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.509690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.509755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.509772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.509779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.509785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.509799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.519716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.519787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.519801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.519808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.519814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.519829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.529745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.529831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.529845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.529851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.529857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.529872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.539778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.539845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.539860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.539867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.539872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.539887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.549857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.549925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.549940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.549946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.549953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.549970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.559771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.559863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.559877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.559885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.609 [2024-10-07 07:49:25.559891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.609 [2024-10-07 07:49:25.559906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.609 qpair failed and we were unable to recover it. 
00:30:21.609 [2024-10-07 07:49:25.569875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.609 [2024-10-07 07:49:25.569981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.609 [2024-10-07 07:49:25.569997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.609 [2024-10-07 07:49:25.570004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.610 [2024-10-07 07:49:25.570010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.610 [2024-10-07 07:49:25.570025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.610 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.579915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.579988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.580003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.580010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.580016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.580032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.589964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.590035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.590050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.590056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.590067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.590082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.599926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.600055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.600074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.600081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.600087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.600103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.609963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.610036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.610051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.610062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.610068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.610083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.619995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.620104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.620119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.620126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.620132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.620154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.630051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.630166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.630186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.630193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.630199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.630215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.640053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.640122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.640136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.640142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.640151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.640166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.650080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.650154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.650168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.650175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.650181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.650195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.660156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.660222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.660237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.660243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.660249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.660264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.670139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.670209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.670224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.670230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.670236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.670251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.680161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.871 [2024-10-07 07:49:25.680229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.871 [2024-10-07 07:49:25.680243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.871 [2024-10-07 07:49:25.680250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.871 [2024-10-07 07:49:25.680256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.871 [2024-10-07 07:49:25.680271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.871 qpair failed and we were unable to recover it. 
00:30:21.871 [2024-10-07 07:49:25.690241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.690344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.690358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.690364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.690370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.690385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.700228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.700315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.700329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.700336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.700342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.700356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.710222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.710290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.710304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.710311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.710316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.710331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.720262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.720334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.720348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.720354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.720360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.720374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.730267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.730333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.730347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.730356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.730362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.730377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.740304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.740404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.740418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.740424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.740431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.740446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.750334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.750422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.750436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.750442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.750448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.750462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.760405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.760478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.760492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.760499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.760504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.760519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.770385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.770453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.770468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.770475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.770480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.770495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.780426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.780492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.780507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.780514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.780521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.780535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.790468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.790536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.790550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.790556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.790562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.790577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.800424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.800495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.800509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.800516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.800521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.800536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.810520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.810590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.810604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.810611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.872 [2024-10-07 07:49:25.810617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.872 [2024-10-07 07:49:25.810631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.872 qpair failed and we were unable to recover it. 
00:30:21.872 [2024-10-07 07:49:25.820547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.872 [2024-10-07 07:49:25.820612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.872 [2024-10-07 07:49:25.820627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.872 [2024-10-07 07:49:25.820636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.873 [2024-10-07 07:49:25.820642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.873 [2024-10-07 07:49:25.820657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.873 qpair failed and we were unable to recover it. 
00:30:21.873 [2024-10-07 07:49:25.830604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.873 [2024-10-07 07:49:25.830671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.873 [2024-10-07 07:49:25.830686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.873 [2024-10-07 07:49:25.830693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.873 [2024-10-07 07:49:25.830699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:21.873 [2024-10-07 07:49:25.830714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.873 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.840625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.133 [2024-10-07 07:49:25.840693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.133 [2024-10-07 07:49:25.840707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.133 [2024-10-07 07:49:25.840713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.133 [2024-10-07 07:49:25.840719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.133 [2024-10-07 07:49:25.840734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.133 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.850626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.133 [2024-10-07 07:49:25.850697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.133 [2024-10-07 07:49:25.850711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.133 [2024-10-07 07:49:25.850717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.133 [2024-10-07 07:49:25.850723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.133 [2024-10-07 07:49:25.850738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.133 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.860722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.133 [2024-10-07 07:49:25.860822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.133 [2024-10-07 07:49:25.860836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.133 [2024-10-07 07:49:25.860843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.133 [2024-10-07 07:49:25.860849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.133 [2024-10-07 07:49:25.860863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.133 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.870678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.133 [2024-10-07 07:49:25.870748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.133 [2024-10-07 07:49:25.870763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.133 [2024-10-07 07:49:25.870770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.133 [2024-10-07 07:49:25.870775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.133 [2024-10-07 07:49:25.870791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.133 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.880677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.133 [2024-10-07 07:49:25.880746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.133 [2024-10-07 07:49:25.880760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.133 [2024-10-07 07:49:25.880767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.133 [2024-10-07 07:49:25.880773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.133 [2024-10-07 07:49:25.880788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.133 qpair failed and we were unable to recover it. 
00:30:22.133 [2024-10-07 07:49:25.890692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.890762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.890776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.890783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.890789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.890803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.900788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.900856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.900870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.900876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.900882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.900897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.910811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.910880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.910897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.910904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.910910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.910924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.920855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.920919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.920934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.920940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.920946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.920960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.930801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.930873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.930889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.930896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.930902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.930916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.940946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.941051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.941116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.941124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.941130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.941146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.950971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.951039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.951054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.951065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.951071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.951089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.960943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.961015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.961029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.961036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.961043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.961062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.970991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.971066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.971082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.971091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.971097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.971112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.981066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.981172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.981186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.981193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.981198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.981213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:25.991069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:25.991137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:25.991151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:25.991157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:25.991163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:25.991178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:26.001088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.134 [2024-10-07 07:49:26.001159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.134 [2024-10-07 07:49:26.001176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.134 [2024-10-07 07:49:26.001183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.134 [2024-10-07 07:49:26.001189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.134 [2024-10-07 07:49:26.001204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.134 qpair failed and we were unable to recover it. 
00:30:22.134 [2024-10-07 07:49:26.011112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.134 [2024-10-07 07:49:26.011177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.134 [2024-10-07 07:49:26.011192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.134 [2024-10-07 07:49:26.011198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.134 [2024-10-07 07:49:26.011204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.134 [2024-10-07 07:49:26.011219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.134 qpair failed and we were unable to recover it.
00:30:22.134 [2024-10-07 07:49:26.021148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.134 [2024-10-07 07:49:26.021223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.134 [2024-10-07 07:49:26.021238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.134 [2024-10-07 07:49:26.021245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.134 [2024-10-07 07:49:26.021251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.021265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.031219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.031339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.031354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.031360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.031366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.031383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.041221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.041309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.041322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.041329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.041334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.041352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.051234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.051322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.051336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.051343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.051349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.051364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.061200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.061268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.061282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.061289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.061295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.061310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.071314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.071387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.071402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.071408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.071414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.071429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.081358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.081437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.081450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.081457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.081463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.081477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.091342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.091409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.091426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.091433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.091439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.091454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.135 [2024-10-07 07:49:26.101384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.135 [2024-10-07 07:49:26.101461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.135 [2024-10-07 07:49:26.101474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.135 [2024-10-07 07:49:26.101481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.135 [2024-10-07 07:49:26.101487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.135 [2024-10-07 07:49:26.101502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.135 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.111427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.111501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.111516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.111522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.111528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.111542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.121478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.121543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.121558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.121564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.121570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.121585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.131409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.131482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.131497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.131503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.131512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.131528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.141502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.141574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.141589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.141595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.141601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.141616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.151526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.151592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.151606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.151613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.151619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.151634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.161557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.161629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.161643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.161650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.161656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.161670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.171499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.171570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.171584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.171591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.171597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.171611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.181599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.181675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.181689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.181696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.181702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.181716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.191645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.191716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.191730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.191736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.191742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.191757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.201651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.201714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.201729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.201735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.201741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.201756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.211725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.211791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.211804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.211811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.211817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.211831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.221726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.221806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.221820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.221826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.221835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.396 [2024-10-07 07:49:26.221849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.396 qpair failed and we were unable to recover it.
00:30:22.396 [2024-10-07 07:49:26.231763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.396 [2024-10-07 07:49:26.231833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.396 [2024-10-07 07:49:26.231847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.396 [2024-10-07 07:49:26.231854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.396 [2024-10-07 07:49:26.231859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.231874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.241786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.241853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.241868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.241874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.241880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.241895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.251811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.251879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.251893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.251899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.251905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.251920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.261842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.261908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.261922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.261929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.261935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.261950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.271920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.272000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.272014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.272020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.272026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.272041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.281891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.281971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.281986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.281992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.281998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.282013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.291935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.292037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.292051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.292063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.292069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.292084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.301961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.302033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.302047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.302054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.302064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.302078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.311989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.312103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.312117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.312127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.312133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.312148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.322035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.322109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.322123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.322130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.322135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.322150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.332079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.332149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.332163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.332170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.332175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.332190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.342091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.342203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.342218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.342225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.342230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.342246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.352032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.352103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.352117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.352124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.352130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.352144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.397 qpair failed and we were unable to recover it.
00:30:22.397 [2024-10-07 07:49:26.362150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.397 [2024-10-07 07:49:26.362263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.397 [2024-10-07 07:49:26.362278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.397 [2024-10-07 07:49:26.362284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.397 [2024-10-07 07:49:26.362291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.397 [2024-10-07 07:49:26.362305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.398 qpair failed and we were unable to recover it.
00:30:22.658 [2024-10-07 07:49:26.372177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.372266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.658 [2024-10-07 07:49:26.372280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.658 [2024-10-07 07:49:26.372287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.658 [2024-10-07 07:49:26.372293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.658 [2024-10-07 07:49:26.372308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.658 qpair failed and we were unable to recover it. 
00:30:22.658 [2024-10-07 07:49:26.382235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.382338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.658 [2024-10-07 07:49:26.382352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.658 [2024-10-07 07:49:26.382359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.658 [2024-10-07 07:49:26.382365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.658 [2024-10-07 07:49:26.382379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.658 qpair failed and we were unable to recover it. 
00:30:22.658 [2024-10-07 07:49:26.392205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.392273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.658 [2024-10-07 07:49:26.392288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.658 [2024-10-07 07:49:26.392294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.658 [2024-10-07 07:49:26.392300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.658 [2024-10-07 07:49:26.392315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.658 qpair failed and we were unable to recover it. 
00:30:22.658 [2024-10-07 07:49:26.402289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.402394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.658 [2024-10-07 07:49:26.402411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.658 [2024-10-07 07:49:26.402418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.658 [2024-10-07 07:49:26.402424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.658 [2024-10-07 07:49:26.402439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.658 qpair failed and we were unable to recover it. 
00:30:22.658 [2024-10-07 07:49:26.412324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.412438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.658 [2024-10-07 07:49:26.412453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.658 [2024-10-07 07:49:26.412459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.658 [2024-10-07 07:49:26.412466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.658 [2024-10-07 07:49:26.412481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.658 qpair failed and we were unable to recover it. 
00:30:22.658 [2024-10-07 07:49:26.422314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.658 [2024-10-07 07:49:26.422375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.422390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.422397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.422402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.422417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.432335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.432402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.432416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.432422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.432428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.432442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.442362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.442442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.442456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.442462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.442469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.442483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.452378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.452486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.452506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.452513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.452519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.452535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.462416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.462507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.462520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.462527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.462533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.462548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.472467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.472536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.472550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.472557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.472562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.472577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.482431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.482510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.482524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.482530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.482536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.482550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.492550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.492614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.492631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.492638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.492644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.492659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.502535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.502634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.502648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.502655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.502661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.502675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.512560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.512631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.512645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.512652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.512658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.512672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.522581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.522657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.522671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.522677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.522683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.522697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.532612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.532727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.532743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.532749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.532756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.532774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.542631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.542700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.542714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.542721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.542727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.659 [2024-10-07 07:49:26.542742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.659 qpair failed and we were unable to recover it. 
00:30:22.659 [2024-10-07 07:49:26.552666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.659 [2024-10-07 07:49:26.552734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.659 [2024-10-07 07:49:26.552748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.659 [2024-10-07 07:49:26.552755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.659 [2024-10-07 07:49:26.552760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.552775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.562689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.562788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.562801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.562808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.562813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.562828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.572739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.572850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.572865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.572872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.572878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.572893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.582750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.582814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.582831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.582838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.582844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.582859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.592803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.592872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.592886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.592893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.592899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.592913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.602805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.602876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.602891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.602897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.602903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.602918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.612845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.612912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.612926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.612933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.612938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.612953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.660 [2024-10-07 07:49:26.622869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.660 [2024-10-07 07:49:26.622938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.660 [2024-10-07 07:49:26.622952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.660 [2024-10-07 07:49:26.622959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.660 [2024-10-07 07:49:26.622968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.660 [2024-10-07 07:49:26.622982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.660 qpair failed and we were unable to recover it. 
00:30:22.920 [2024-10-07 07:49:26.632852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.921 [2024-10-07 07:49:26.632933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.921 [2024-10-07 07:49:26.632947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.921 [2024-10-07 07:49:26.632953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.921 [2024-10-07 07:49:26.632959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.921 [2024-10-07 07:49:26.632974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.921 qpair failed and we were unable to recover it. 
00:30:22.921 [2024-10-07 07:49:26.642940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.921 [2024-10-07 07:49:26.643051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.921 [2024-10-07 07:49:26.643069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.921 [2024-10-07 07:49:26.643076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.921 [2024-10-07 07:49:26.643081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:22.921 [2024-10-07 07:49:26.643096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.921 qpair failed and we were unable to recover it. 
00:30:22.921 [2024-10-07 07:49:26.652907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.652979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.652996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.653003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.653009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.653024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.663045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.663117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.663133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.663140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.663146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.663162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.673038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.673122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.673137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.673144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.673151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.673166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.683048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.683127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.683142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.683149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.683155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.683170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.693075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.693147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.693161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.693168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.693174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.693189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.703097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.703174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.703188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.703196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.703201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.703216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.713142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.713239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.713253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.713260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.713269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.713285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.723119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.723189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.723204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.723211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.723217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.723232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.733221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.733321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.733335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.733342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.733348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.733364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.743232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.743336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.743350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.743358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.743365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.743381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.753266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.753334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.921 [2024-10-07 07:49:26.753350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.921 [2024-10-07 07:49:26.753359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.921 [2024-10-07 07:49:26.753366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.921 [2024-10-07 07:49:26.753381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.921 qpair failed and we were unable to recover it.
00:30:22.921 [2024-10-07 07:49:26.763281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.921 [2024-10-07 07:49:26.763346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.763361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.763368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.763374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.763389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.773315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.773383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.773399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.773406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.773413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.773428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.783263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.783338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.783353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.783360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.783367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.783382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.793375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.793456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.793471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.793478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.793485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.793500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.803400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.803491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.803505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.803516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.803522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.803537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.813441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.813559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.813574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.813581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.813587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.813603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.823483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.823553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.823567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.823574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.823580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.823595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.833491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.833561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.833575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.833582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.833588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.833603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.843554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.843656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.843671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.843678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.843685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.843700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.853541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.853611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.853627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.853634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.853641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.853656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.863575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.863646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.863661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.863668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.863674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.863689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.873629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.873700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.873714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.873722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.873728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.873743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:22.922 [2024-10-07 07:49:26.883619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:22.922 [2024-10-07 07:49:26.883697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:22.922 [2024-10-07 07:49:26.883712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:22.922 [2024-10-07 07:49:26.883719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:22.922 [2024-10-07 07:49:26.883726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:22.922 [2024-10-07 07:49:26.883741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:22.922 qpair failed and we were unable to recover it.
00:30:23.183 [2024-10-07 07:49:26.893658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.183 [2024-10-07 07:49:26.893730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.183 [2024-10-07 07:49:26.893745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.183 [2024-10-07 07:49:26.893755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.183 [2024-10-07 07:49:26.893761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.183 [2024-10-07 07:49:26.893776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.183 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.903668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.903741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.903755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.903763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.903769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.903784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.913742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.913830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.913844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.913851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.913858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.913873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.923721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.923835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.923851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.923858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.923864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.923879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.933761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.933833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.933848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.933855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.933861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.933876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.943794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.943872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.943888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.943895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.943902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.943917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.953763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.953833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.953847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.953854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.953861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.953876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.963872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.963957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.963971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.963978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.963984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.964000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.973884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.973956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.973970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.973977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.973984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.973999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.983872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.983942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.983962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.983969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.983975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.983990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:26.993892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:26.993960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:26.993975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:26.993982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:26.993988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:26.994004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:27.003944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:27.004024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.184 [2024-10-07 07:49:27.004039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.184 [2024-10-07 07:49:27.004047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.184 [2024-10-07 07:49:27.004054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.184 [2024-10-07 07:49:27.004074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.184 qpair failed and we were unable to recover it.
00:30:23.184 [2024-10-07 07:49:27.013920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.184 [2024-10-07 07:49:27.013994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.184 [2024-10-07 07:49:27.014010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.184 [2024-10-07 07:49:27.014018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.184 [2024-10-07 07:49:27.014025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.184 [2024-10-07 07:49:27.014040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.184 qpair failed and we were unable to recover it. 
00:30:23.184 [2024-10-07 07:49:27.023994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.184 [2024-10-07 07:49:27.024065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.184 [2024-10-07 07:49:27.024080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.184 [2024-10-07 07:49:27.024088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.184 [2024-10-07 07:49:27.024095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.184 [2024-10-07 07:49:27.024115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.184 qpair failed and we were unable to recover it. 
00:30:23.184 [2024-10-07 07:49:27.034017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.184 [2024-10-07 07:49:27.034092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.034107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.034114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.034121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.034137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.044004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.044080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.044094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.044102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.044108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.044123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.054112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.054195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.054211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.054218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.054225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.054240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.064106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.064172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.064187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.064194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.064202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.064217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.074187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.074257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.074274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.074282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.074288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.074303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.084122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.084211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.084225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.084233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.084239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.084254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.094202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.094273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.094287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.094295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.094301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.094316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.104267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.104370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.104386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.104394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.104400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.104416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.114292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.114372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.114387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.114394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.114401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.114419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.124279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.124360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.124376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.124383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.124389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.124404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.134366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.134444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.134460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.134468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.134475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.134490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.185 [2024-10-07 07:49:27.144367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.185 [2024-10-07 07:49:27.144433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.185 [2024-10-07 07:49:27.144447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.185 [2024-10-07 07:49:27.144454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.185 [2024-10-07 07:49:27.144461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.185 [2024-10-07 07:49:27.144476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.185 qpair failed and we were unable to recover it.
00:30:23.446 [2024-10-07 07:49:27.154351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.446 [2024-10-07 07:49:27.154419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.446 [2024-10-07 07:49:27.154433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.446 [2024-10-07 07:49:27.154441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.446 [2024-10-07 07:49:27.154447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.446 [2024-10-07 07:49:27.154462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.446 qpair failed and we were unable to recover it.
00:30:23.446 [2024-10-07 07:49:27.164403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.446 [2024-10-07 07:49:27.164473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.446 [2024-10-07 07:49:27.164488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.446 [2024-10-07 07:49:27.164495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.446 [2024-10-07 07:49:27.164502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.446 [2024-10-07 07:49:27.164517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.446 qpair failed and we were unable to recover it.
00:30:23.446 [2024-10-07 07:49:27.174518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.446 [2024-10-07 07:49:27.174594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.446 [2024-10-07 07:49:27.174609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.446 [2024-10-07 07:49:27.174618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.446 [2024-10-07 07:49:27.174625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.446 [2024-10-07 07:49:27.174641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.446 qpair failed and we were unable to recover it.
00:30:23.446 [2024-10-07 07:49:27.184445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.446 [2024-10-07 07:49:27.184527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.446 [2024-10-07 07:49:27.184544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.446 [2024-10-07 07:49:27.184551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.446 [2024-10-07 07:49:27.184557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.184573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.194496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.194579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.194594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.194602] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.194608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.194623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.204470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.204566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.204581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.204588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.204598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.204613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.214504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.214575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.214590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.214598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.214604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.214619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.224573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.224645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.224659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.224667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.224673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.224689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.234577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.234655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.234670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.234677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.234683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.234698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.244658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.244733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.244747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.244754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.244760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.244776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.254677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.254760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.254775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.254782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.254788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.254802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.264678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.264752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.264767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.264774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.264780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.264795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.274669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.274742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.274756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.274764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.274770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.274785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.284681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.284754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.284768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.284775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.284782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.284796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.294813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.294883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.294899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.294911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.294918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.294933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.304830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.304893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.304908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.304915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.304921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.304936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.314847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.447 [2024-10-07 07:49:27.314916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.447 [2024-10-07 07:49:27.314931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.447 [2024-10-07 07:49:27.314938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.447 [2024-10-07 07:49:27.314945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.447 [2024-10-07 07:49:27.314960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.447 qpair failed and we were unable to recover it.
00:30:23.447 [2024-10-07 07:49:27.324918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:23.448 [2024-10-07 07:49:27.324988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:23.448 [2024-10-07 07:49:27.325003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:23.448 [2024-10-07 07:49:27.325010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:23.448 [2024-10-07 07:49:27.325017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90
00:30:23.448 [2024-10-07 07:49:27.325033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:23.448 qpair failed and we were unable to recover it.
00:30:23.448 [2024-10-07 07:49:27.334961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.335088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.335103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.335110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.335117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.335133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.344938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.345053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.345071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.345079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.345086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.345103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.354927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.354998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.355014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.355021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.355027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.355043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.365011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.365087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.365102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.365109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.365115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.365131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.374946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.375022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.375037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.375044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.375051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.375070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.385062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.385181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.385196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.385206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.385213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.385229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.395043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.395136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.395151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.395158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.395164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.395180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.405105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.405172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.405188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.405195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.405201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.405217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.448 [2024-10-07 07:49:27.415095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.448 [2024-10-07 07:49:27.415167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.448 [2024-10-07 07:49:27.415182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.448 [2024-10-07 07:49:27.415189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.448 [2024-10-07 07:49:27.415195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.448 [2024-10-07 07:49:27.415210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.448 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.425174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.425282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.425295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.425302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.425309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.425325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.435182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.435253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.435269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.435276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.435282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.435297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.445249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.445323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.445338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.445345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.445351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.445367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.455247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.455315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.455330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.455336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.455343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.455358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.465286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.465362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.465377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.465384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.465391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.465406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.475296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.475366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.475384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.475392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.475398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.475414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.485327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.485402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.485418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.485425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.485431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.485446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.495362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.495446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.495460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.495467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.495473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.495488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.505387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.505471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.505485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.505492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.505498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.505513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.515434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.515508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.515523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.515530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.515536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.515555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.525509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.709 [2024-10-07 07:49:27.525616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.709 [2024-10-07 07:49:27.525630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.709 [2024-10-07 07:49:27.525637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.709 [2024-10-07 07:49:27.525644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.709 [2024-10-07 07:49:27.525660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.709 qpair failed and we were unable to recover it. 
00:30:23.709 [2024-10-07 07:49:27.535508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.535621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.535636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.535644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.535650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.535666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.545515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.545581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.545595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.545603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.545609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.545624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.555504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.555627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.555643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.555650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.555656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.555672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.565567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.565635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.565654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.565661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.565668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.565683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.575600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.575674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.575691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.575698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.575704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.575720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.585598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.585666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.585681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.585688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.585694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.585709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.595632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.595700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.595715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.595722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.595728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.595743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.605689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.605770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.605785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.605792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.605798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.605816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.615701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.615768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.615783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.615790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.615797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.615812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.625770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.625888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.625905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.625912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.625919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.625935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.635827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.635910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.635924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.635931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.635938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.635953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.645807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.645873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.645889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.645896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.645902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.645918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.655892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.655976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.655993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.656000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.710 [2024-10-07 07:49:27.656006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.710 [2024-10-07 07:49:27.656022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.710 qpair failed and we were unable to recover it. 
00:30:23.710 [2024-10-07 07:49:27.665863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.710 [2024-10-07 07:49:27.665930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.710 [2024-10-07 07:49:27.665945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.710 [2024-10-07 07:49:27.665952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.711 [2024-10-07 07:49:27.665958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.711 [2024-10-07 07:49:27.665973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.711 qpair failed and we were unable to recover it. 
00:30:23.711 [2024-10-07 07:49:27.675844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.711 [2024-10-07 07:49:27.675919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.711 [2024-10-07 07:49:27.675934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.711 [2024-10-07 07:49:27.675941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.711 [2024-10-07 07:49:27.675948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.711 [2024-10-07 07:49:27.675963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.711 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.685906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.685975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.685990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.685997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.686003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.686018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.695945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.696016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.696032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.696039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.696049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.696068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.705965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.706038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.706054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.706066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.706073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.706089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.716043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.716147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.716161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.716168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.716174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.716191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.726044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.726121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.726136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.726144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.726150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.726166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.736074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.736183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.736197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.736205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.736211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.736228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.746095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.746162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.746176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.746184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.746190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.746206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.756111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.756180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.756194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.756202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.756208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.756223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.766148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.766221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.766236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.766244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.766250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.766266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.776184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.776257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.776272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.776279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.971 [2024-10-07 07:49:27.776285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.971 [2024-10-07 07:49:27.776301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-10-07 07:49:27.786216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.971 [2024-10-07 07:49:27.786290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.971 [2024-10-07 07:49:27.786305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.971 [2024-10-07 07:49:27.786312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.786323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.786339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.796236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.796307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.796323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.796332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.796338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.796354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.806301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.806405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.806420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.806427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.806434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.806450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.816302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.816373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.816387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.816395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.816401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.816416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.826342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.826413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.826428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.826435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.826442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.826457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.836379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.836486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.836501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.836509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.836516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.836531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.846390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.846467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.846483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.846491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.846497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.846513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.856412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.856481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.856496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.856503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.856510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.856525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.866446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.866515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.866530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.866538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.866544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.866559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.876427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.876499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.876514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.876524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.876530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.876546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.886502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.886576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.886591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.886598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.886604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.886619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.896512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.896602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.896617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.896624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.896630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.896646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.906549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.906617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.906633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.906640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.906647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.906663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.916572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.972 [2024-10-07 07:49:27.916651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.972 [2024-10-07 07:49:27.916666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.972 [2024-10-07 07:49:27.916674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.972 [2024-10-07 07:49:27.916680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.972 [2024-10-07 07:49:27.916695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-10-07 07:49:27.926597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.973 [2024-10-07 07:49:27.926678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.973 [2024-10-07 07:49:27.926693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.973 [2024-10-07 07:49:27.926700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.973 [2024-10-07 07:49:27.926706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.973 [2024-10-07 07:49:27.926721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-10-07 07:49:27.936641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.973 [2024-10-07 07:49:27.936717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.973 [2024-10-07 07:49:27.936731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.973 [2024-10-07 07:49:27.936738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.973 [2024-10-07 07:49:27.936744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:23.973 [2024-10-07 07:49:27.936759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:24.232 [2024-10-07 07:49:27.946639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.232 [2024-10-07 07:49:27.946707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.232 [2024-10-07 07:49:27.946722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.232 [2024-10-07 07:49:27.946729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.232 [2024-10-07 07:49:27.946736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.232 [2024-10-07 07:49:27.946750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.232 qpair failed and we were unable to recover it. 
00:30:24.232 [2024-10-07 07:49:27.956693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.232 [2024-10-07 07:49:27.956763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.232 [2024-10-07 07:49:27.956777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.232 [2024-10-07 07:49:27.956784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.232 [2024-10-07 07:49:27.956790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.232 [2024-10-07 07:49:27.956805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.232 qpair failed and we were unable to recover it. 
00:30:24.232 [2024-10-07 07:49:27.966636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.232 [2024-10-07 07:49:27.966711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.232 [2024-10-07 07:49:27.966728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.232 [2024-10-07 07:49:27.966736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.232 [2024-10-07 07:49:27.966742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.232 [2024-10-07 07:49:27.966757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.232 qpair failed and we were unable to recover it. 
00:30:24.232 [2024-10-07 07:49:27.976756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.232 [2024-10-07 07:49:27.976823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.232 [2024-10-07 07:49:27.976838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.232 [2024-10-07 07:49:27.976846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.232 [2024-10-07 07:49:27.976852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.232 [2024-10-07 07:49:27.976867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.232 qpair failed and we were unable to recover it. 
00:30:24.232 [2024-10-07 07:49:27.986770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.232 [2024-10-07 07:49:27.986847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.232 [2024-10-07 07:49:27.986862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.232 [2024-10-07 07:49:27.986869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:27.986875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:27.986890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:27.996804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:27.996872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:27.996886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:27.996893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:27.996900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:27.996915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.006837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.006904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.006919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.006927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.006933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:28.006948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.016873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.016939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.016954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.016961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.016968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:28.016984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.026938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.027014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.027029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.027036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.027042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:28.027063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.036943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.037061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.037076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.037083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.037090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfac000b90 00:30:24.233 [2024-10-07 07:49:28.037107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.037478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e5900 is same with the state(5) to be set 00:30:24.233 [2024-10-07 07:49:28.047000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.047122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.047156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.047171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.047182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d7ea0 00:30:24.233 [2024-10-07 07:49:28.047208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.056905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.056983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.057002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.057009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.057016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7d7ea0 00:30:24.233 [2024-10-07 07:49:28.057032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.067016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.067160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.067193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.067206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.067217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfb8000b90 00:30:24.233 [2024-10-07 07:49:28.067244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.077047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.077131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.077150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.077160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.077168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfb8000b90 00:30:24.233 [2024-10-07 07:49:28.077186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.087099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.087183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.087205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.087214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.087221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfb0000b90 00:30:24.233 [2024-10-07 07:49:28.087240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.233 qpair failed and we were unable to recover it. 
00:30:24.233 [2024-10-07 07:49:28.097036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.233 [2024-10-07 07:49:28.097107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.233 [2024-10-07 07:49:28.097125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.233 [2024-10-07 07:49:28.097133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.233 [2024-10-07 07:49:28.097143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbfb0000b90 00:30:24.233 [2024-10-07 07:49:28.097160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:24.233 qpair failed and we were unable to recover it. 00:30:24.233 [2024-10-07 07:49:28.097480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e5900 (9): Bad file descriptor 00:30:24.233 Initializing NVMe Controllers 00:30:24.233 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:24.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:24.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:24.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:24.233 Initialization complete. Launching workers. 
00:30:24.233 Starting thread on core 1 00:30:24.233 Starting thread on core 2 00:30:24.233 Starting thread on core 3 00:30:24.233 Starting thread on core 0 00:30:24.233 07:49:28 -- host/target_disconnect.sh@59 -- # sync 00:30:24.233 00:30:24.233 real 0m11.337s 00:30:24.233 user 0m21.058s 00:30:24.233 sys 0m4.352s 00:30:24.233 07:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.233 07:49:28 -- common/autotest_common.sh@10 -- # set +x 00:30:24.233 ************************************ 00:30:24.233 END TEST nvmf_target_disconnect_tc2 00:30:24.233 ************************************ 00:30:24.233 07:49:28 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:24.233 07:49:28 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:24.233 07:49:28 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:24.233 07:49:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:24.234 07:49:28 -- nvmf/common.sh@116 -- # sync 00:30:24.234 07:49:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:24.234 07:49:28 -- nvmf/common.sh@119 -- # set +e 00:30:24.234 07:49:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:24.234 07:49:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:24.234 rmmod nvme_tcp 00:30:24.234 rmmod nvme_fabrics 00:30:24.234 rmmod nvme_keyring 00:30:24.234 07:49:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:24.493 07:49:28 -- nvmf/common.sh@123 -- # set -e 00:30:24.493 07:49:28 -- nvmf/common.sh@124 -- # return 0 00:30:24.493 07:49:28 -- nvmf/common.sh@477 -- # '[' -n 107152 ']' 00:30:24.493 07:49:28 -- nvmf/common.sh@478 -- # killprocess 107152 00:30:24.493 07:49:28 -- common/autotest_common.sh@926 -- # '[' -z 107152 ']' 00:30:24.493 07:49:28 -- common/autotest_common.sh@930 -- # kill -0 107152 00:30:24.493 07:49:28 -- common/autotest_common.sh@931 -- # uname 00:30:24.493 07:49:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:24.493 07:49:28 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 107152 00:30:24.493 07:49:28 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:24.493 07:49:28 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:24.493 07:49:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107152' 00:30:24.493 killing process with pid 107152 00:30:24.494 07:49:28 -- common/autotest_common.sh@945 -- # kill 107152 00:30:24.494 07:49:28 -- common/autotest_common.sh@950 -- # wait 107152 00:30:24.753 07:49:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:24.753 07:49:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:24.753 07:49:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:24.753 07:49:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.753 07:49:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:24.753 07:49:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.753 07:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.753 07:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.661 07:49:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:26.661 00:30:26.661 real 0m19.033s 00:30:26.661 user 0m48.198s 00:30:26.661 sys 0m8.498s 00:30:26.661 07:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.661 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.661 ************************************ 00:30:26.661 END TEST nvmf_target_disconnect 00:30:26.661 ************************************ 00:30:26.661 07:49:30 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:26.661 07:49:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:26.661 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.661 07:49:30 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:26.661 00:30:26.661 real 23m33.668s 00:30:26.661 user 63m50.945s 00:30:26.661 sys 6m15.587s 00:30:26.661 07:49:30 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:30:26.661 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.661 ************************************ 00:30:26.661 END TEST nvmf_tcp 00:30:26.661 ************************************ 00:30:26.922 07:49:30 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:26.922 07:49:30 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:26.922 07:49:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:26.922 07:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.922 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.922 ************************************ 00:30:26.922 START TEST spdkcli_nvmf_tcp 00:30:26.922 ************************************ 00:30:26.922 07:49:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:26.922 * Looking for test storage... 00:30:26.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:26.922 07:49:30 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:26.922 07:49:30 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.922 07:49:30 -- nvmf/common.sh@7 -- # uname -s 00:30:26.922 07:49:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.922 07:49:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.922 07:49:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.922 07:49:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.922 07:49:30 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.922 07:49:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.922 07:49:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.922 07:49:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.922 07:49:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.922 07:49:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.922 07:49:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:26.922 07:49:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:26.922 07:49:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.922 07:49:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.922 07:49:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.922 07:49:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.922 07:49:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.922 07:49:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.922 07:49:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.922 07:49:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.922 07:49:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.922 07:49:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.922 07:49:30 -- paths/export.sh@5 -- # export PATH 00:30:26.922 07:49:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.922 07:49:30 -- nvmf/common.sh@46 -- # : 0 00:30:26.922 07:49:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:26.922 07:49:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:26.922 07:49:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:26.922 07:49:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.922 07:49:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.922 07:49:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:26.922 07:49:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:26.922 07:49:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:26.922 
07:49:30 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:26.922 07:49:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:26.922 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.922 07:49:30 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:26.922 07:49:30 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108662 00:30:26.922 07:49:30 -- spdkcli/common.sh@34 -- # waitforlisten 108662 00:30:26.922 07:49:30 -- common/autotest_common.sh@819 -- # '[' -z 108662 ']' 00:30:26.922 07:49:30 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:26.922 07:49:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.922 07:49:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:26.922 07:49:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.922 07:49:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:26.922 07:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.922 [2024-10-07 07:49:30.801565] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:30:26.922 [2024-10-07 07:49:30.801614] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108662 ] 00:30:26.922 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.922 [2024-10-07 07:49:30.853468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.182 [2024-10-07 07:49:30.925429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:27.182 [2024-10-07 07:49:30.925574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.182 [2024-10-07 07:49:30.925577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.750 07:49:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:27.750 07:49:31 -- common/autotest_common.sh@852 -- # return 0 00:30:27.750 07:49:31 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:27.750 07:49:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:27.750 07:49:31 -- common/autotest_common.sh@10 -- # set +x 00:30:27.750 07:49:31 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:27.750 07:49:31 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:27.750 07:49:31 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:27.750 07:49:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:27.750 07:49:31 -- common/autotest_common.sh@10 -- # set +x 00:30:27.750 07:49:31 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:27.750 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:27.750 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:27.750 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:27.750 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:30:27.750 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:27.750 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:27.750 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.750 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.750 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:27.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:27.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:27.751 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:27.751 ' 00:30:28.319 [2024-10-07 07:49:32.005843] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:30.224 [2024-10-07 07:49:34.049875] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.599 [2024-10-07 07:49:35.237849] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:33.502 [2024-10-07 07:49:37.428682] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:35.415 [2024-10-07 07:49:39.310788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 
00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:36.793 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:36.793 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:36.794 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.794 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.794 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', 
'127.0.0.1:4261', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:36.794 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:36.794 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:37.053 07:49:40 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:37.053 07:49:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:37.053 07:49:40 -- common/autotest_common.sh@10 -- # set +x 00:30:37.053 07:49:40 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:37.053 07:49:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:37.053 07:49:40 -- common/autotest_common.sh@10 -- # set +x 00:30:37.053 07:49:40 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:37.053 07:49:40 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:37.312 07:49:41 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:37.576 07:49:41 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:37.576 07:49:41 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:37.576 07:49:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:37.576 07:49:41 -- common/autotest_common.sh@10 -- # set +x 00:30:37.576 07:49:41 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:37.576 07:49:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:37.576 07:49:41 -- common/autotest_common.sh@10 -- # set +x 00:30:37.576 07:49:41 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:37.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:37.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:37.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:37.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:37.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:37.576 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:37.576 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:37.576 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 
00:30:37.576 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:37.576 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:37.576 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:37.576 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:37.577 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:37.577 ' 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:42.973 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:42.973 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:42.973 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:42.973 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:42.973 07:49:46 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:42.973 07:49:46 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:30:42.973 07:49:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.973 07:49:46 -- spdkcli/nvmf.sh@90 -- # killprocess 108662 00:30:42.973 07:49:46 -- common/autotest_common.sh@926 -- # '[' -z 108662 ']' 00:30:42.973 07:49:46 -- common/autotest_common.sh@930 -- # kill -0 108662 00:30:42.973 07:49:46 -- common/autotest_common.sh@931 -- # uname 00:30:42.973 07:49:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:42.973 07:49:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108662 00:30:42.973 07:49:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:42.973 07:49:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:42.973 07:49:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108662' 00:30:42.973 killing process with pid 108662 00:30:42.973 07:49:46 -- common/autotest_common.sh@945 -- # kill 108662 00:30:42.973 [2024-10-07 07:49:46.445278] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:42.973 07:49:46 -- common/autotest_common.sh@950 -- # wait 108662 00:30:42.973 07:49:46 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:42.973 07:49:46 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:42.973 07:49:46 -- spdkcli/common.sh@13 -- # '[' -n 108662 ']' 00:30:42.973 07:49:46 -- spdkcli/common.sh@14 -- # killprocess 108662 00:30:42.973 07:49:46 -- common/autotest_common.sh@926 -- # '[' -z 108662 ']' 00:30:42.973 07:49:46 -- common/autotest_common.sh@930 -- # kill -0 108662 00:30:42.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (108662) - No such process 00:30:42.973 07:49:46 -- common/autotest_common.sh@953 -- # echo 'Process with pid 108662 is not found' 00:30:42.973 Process with pid 108662 is not found 00:30:42.973 07:49:46 -- 
spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:42.973 07:49:46 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:42.973 07:49:46 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:42.973 00:30:42.973 real 0m16.004s 00:30:42.973 user 0m33.386s 00:30:42.973 sys 0m0.671s 00:30:42.973 07:49:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.973 07:49:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.973 ************************************ 00:30:42.973 END TEST spdkcli_nvmf_tcp 00:30:42.973 ************************************ 00:30:42.973 07:49:46 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.973 07:49:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:42.973 07:49:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.973 07:49:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.973 ************************************ 00:30:42.973 START TEST nvmf_identify_passthru 00:30:42.973 ************************************ 00:30:42.973 07:49:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.973 * Looking for test storage... 
00:30:42.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.973 07:49:46 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.973 07:49:46 -- nvmf/common.sh@7 -- # uname -s 00:30:42.973 07:49:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.973 07:49:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.973 07:49:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.973 07:49:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.973 07:49:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.973 07:49:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.973 07:49:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.973 07:49:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.973 07:49:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.973 07:49:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.973 07:49:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:42.973 07:49:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:42.973 07:49:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.973 07:49:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.973 07:49:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.973 07:49:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.973 07:49:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.973 07:49:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.973 07:49:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.973 07:49:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@5 -- # export PATH 00:30:42.973 07:49:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- nvmf/common.sh@46 -- # : 0 00:30:42.973 07:49:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:42.973 07:49:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:42.973 
07:49:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:42.973 07:49:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.973 07:49:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.973 07:49:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:42.973 07:49:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:42.973 07:49:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:42.973 07:49:46 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.973 07:49:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.973 07:49:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.973 07:49:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.973 07:49:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.973 07:49:46 -- paths/export.sh@5 -- # export PATH 00:30:42.974 07:49:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.974 07:49:46 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:42.974 07:49:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:42.974 07:49:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.974 07:49:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:42.974 07:49:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:42.974 07:49:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:42.974 07:49:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.974 07:49:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:42.974 07:49:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.974 07:49:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:42.974 07:49:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:42.974 07:49:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:42.974 07:49:46 -- 
common/autotest_common.sh@10 -- # set +x 00:30:48.263 07:49:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:48.263 07:49:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:48.263 07:49:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:48.263 07:49:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:48.263 07:49:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:48.263 07:49:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:48.263 07:49:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:48.263 07:49:51 -- nvmf/common.sh@294 -- # net_devs=() 00:30:48.263 07:49:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:48.263 07:49:51 -- nvmf/common.sh@295 -- # e810=() 00:30:48.263 07:49:51 -- nvmf/common.sh@295 -- # local -ga e810 00:30:48.263 07:49:51 -- nvmf/common.sh@296 -- # x722=() 00:30:48.263 07:49:51 -- nvmf/common.sh@296 -- # local -ga x722 00:30:48.263 07:49:51 -- nvmf/common.sh@297 -- # mlx=() 00:30:48.263 07:49:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:48.263 07:49:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.263 07:49:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:48.263 07:49:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:48.263 07:49:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:48.263 07:49:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:48.263 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:48.263 07:49:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:48.263 07:49:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:48.263 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:48.263 07:49:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:48.263 07:49:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:48.263 07:49:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.263 07:49:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:48.263 Found net devices under 0000:af:00.0: cvl_0_0 00:30:48.263 07:49:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.263 07:49:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:48.263 07:49:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.263 07:49:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.263 07:49:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:48.263 Found net devices under 0000:af:00.1: cvl_0_1 00:30:48.263 07:49:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.263 07:49:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:48.263 07:49:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:48.263 07:49:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:48.263 07:49:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.263 07:49:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.263 07:49:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.263 07:49:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:48.263 07:49:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.263 07:49:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.263 07:49:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:48.263 07:49:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.263 07:49:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:48.263 07:49:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:48.263 07:49:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:48.263 07:49:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.263 07:49:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.263 07:49:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.263 07:49:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.263 07:49:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:48.263 07:49:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.263 07:49:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.263 07:49:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.263 07:49:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:48.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:30:48.263 00:30:48.263 --- 10.0.0.2 ping statistics --- 00:30:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.263 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:30:48.263 07:49:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:30:48.263 00:30:48.263 --- 10.0.0.1 ping statistics --- 00:30:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.263 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:48.263 07:49:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.263 07:49:52 -- nvmf/common.sh@410 -- # return 0 00:30:48.263 07:49:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:48.263 07:49:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.263 07:49:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:48.263 07:49:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:48.263 07:49:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.263 07:49:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:48.263 07:49:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:48.263 07:49:52 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:48.263 07:49:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:48.263 07:49:52 -- common/autotest_common.sh@10 -- # set +x 00:30:48.263 07:49:52 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:48.263 07:49:52 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:48.263 07:49:52 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:48.263 07:49:52 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:48.263 07:49:52 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:48.263 07:49:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:48.263 07:49:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:48.264 07:49:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:48.264 07:49:52 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:48.264 07:49:52 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:48.522 07:49:52 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:48.522 07:49:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:30:48.522 07:49:52 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:30:48.522 07:49:52 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:30:48.522 07:49:52 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:30:48.522 07:49:52 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:48.522 07:49:52 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:48.522 07:49:52 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:48.522 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.706 07:49:56 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:30:52.706 07:49:56 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:52.706 07:49:56 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:52.706 07:49:56 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:52.706 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.899 07:50:00 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:56.899 07:50:00 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:56.899 07:50:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:56.899 07:50:00 -- common/autotest_common.sh@10 -- # set +x 00:30:56.899 07:50:00 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:56.899 07:50:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:56.899 07:50:00 -- common/autotest_common.sh@10 -- # set +x 00:30:56.899 07:50:00 -- target/identify_passthru.sh@31 -- # 
nvmfpid=115632 00:30:56.899 07:50:00 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:56.899 07:50:00 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.899 07:50:00 -- target/identify_passthru.sh@35 -- # waitforlisten 115632 00:30:56.899 07:50:00 -- common/autotest_common.sh@819 -- # '[' -z 115632 ']' 00:30:56.899 07:50:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.899 07:50:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:56.899 07:50:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.900 07:50:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:56.900 07:50:00 -- common/autotest_common.sh@10 -- # set +x 00:30:56.900 [2024-10-07 07:50:00.590699] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:56.900 [2024-10-07 07:50:00.590747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.900 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.900 [2024-10-07 07:50:00.650374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:56.900 [2024-10-07 07:50:00.720859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:56.900 [2024-10-07 07:50:00.720974] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:56.900 [2024-10-07 07:50:00.720982] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.900 [2024-10-07 07:50:00.720989] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.900 [2024-10-07 07:50:00.721089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.900 [2024-10-07 07:50:00.721145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.900 [2024-10-07 07:50:00.721211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.900 [2024-10-07 07:50:00.721212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.468 07:50:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:57.468 07:50:01 -- common/autotest_common.sh@852 -- # return 0 00:30:57.468 07:50:01 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:57.468 07:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.468 07:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.468 INFO: Log level set to 20 00:30:57.468 INFO: Requests: 00:30:57.468 { 00:30:57.468 "jsonrpc": "2.0", 00:30:57.468 "method": "nvmf_set_config", 00:30:57.468 "id": 1, 00:30:57.468 "params": { 00:30:57.468 "admin_cmd_passthru": { 00:30:57.468 "identify_ctrlr": true 00:30:57.468 } 00:30:57.468 } 00:30:57.468 } 00:30:57.468 00:30:57.468 INFO: response: 00:30:57.468 { 00:30:57.468 "jsonrpc": "2.0", 00:30:57.468 "id": 1, 00:30:57.468 "result": true 00:30:57.468 } 00:30:57.468 00:30:57.468 07:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.468 07:50:01 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:57.468 07:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.468 07:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.468 INFO: Setting log level to 20 00:30:57.468 INFO: Setting log level to 20 
00:30:57.468 INFO: Log level set to 20 00:30:57.468 INFO: Log level set to 20 00:30:57.468 INFO: Requests: 00:30:57.468 { 00:30:57.468 "jsonrpc": "2.0", 00:30:57.468 "method": "framework_start_init", 00:30:57.468 "id": 1 00:30:57.468 } 00:30:57.468 00:30:57.468 INFO: Requests: 00:30:57.468 { 00:30:57.468 "jsonrpc": "2.0", 00:30:57.468 "method": "framework_start_init", 00:30:57.468 "id": 1 00:30:57.468 } 00:30:57.468 00:30:57.727 [2024-10-07 07:50:01.507923] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:57.727 INFO: response: 00:30:57.727 { 00:30:57.727 "jsonrpc": "2.0", 00:30:57.727 "id": 1, 00:30:57.727 "result": true 00:30:57.727 } 00:30:57.727 00:30:57.727 INFO: response: 00:30:57.727 { 00:30:57.727 "jsonrpc": "2.0", 00:30:57.727 "id": 1, 00:30:57.727 "result": true 00:30:57.727 } 00:30:57.727 00:30:57.727 07:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.727 07:50:01 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.727 07:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.727 07:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.727 INFO: Setting log level to 40 00:30:57.727 INFO: Setting log level to 40 00:30:57.727 INFO: Setting log level to 40 00:30:57.727 [2024-10-07 07:50:01.521227] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.727 07:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.727 07:50:01 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:57.727 07:50:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:57.727 07:50:01 -- common/autotest_common.sh@10 -- # set +x 00:30:57.727 07:50:01 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:30:57.727 07:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.727 07:50:01 -- common/autotest_common.sh@10 -- # set +x 
00:31:01.017 Nvme0n1 00:31:01.017 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.017 07:50:04 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:01.017 07:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.017 07:50:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.017 07:50:04 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:01.017 07:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.017 07:50:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.017 07:50:04 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.017 07:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.017 07:50:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 [2024-10-07 07:50:04.418705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.017 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.017 07:50:04 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:01.017 07:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.017 07:50:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 [2024-10-07 07:50:04.426490] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:01.017 [ 00:31:01.017 { 00:31:01.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:01.017 "subtype": "Discovery", 00:31:01.017 "listen_addresses": [], 00:31:01.017 "allow_any_host": true, 00:31:01.017 "hosts": [] 00:31:01.017 }, 00:31:01.017 { 
00:31:01.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.017 "subtype": "NVMe", 00:31:01.017 "listen_addresses": [ 00:31:01.017 { 00:31:01.017 "transport": "TCP", 00:31:01.017 "trtype": "TCP", 00:31:01.017 "adrfam": "IPv4", 00:31:01.017 "traddr": "10.0.0.2", 00:31:01.017 "trsvcid": "4420" 00:31:01.017 } 00:31:01.017 ], 00:31:01.017 "allow_any_host": true, 00:31:01.017 "hosts": [], 00:31:01.017 "serial_number": "SPDK00000000000001", 00:31:01.017 "model_number": "SPDK bdev Controller", 00:31:01.017 "max_namespaces": 1, 00:31:01.017 "min_cntlid": 1, 00:31:01.017 "max_cntlid": 65519, 00:31:01.017 "namespaces": [ 00:31:01.017 { 00:31:01.017 "nsid": 1, 00:31:01.017 "bdev_name": "Nvme0n1", 00:31:01.017 "name": "Nvme0n1", 00:31:01.017 "nguid": "27DE84E90F2C478F99CF3E5CD26E2A86", 00:31:01.017 "uuid": "27de84e9-0f2c-478f-99cf-3e5cd26e2a86" 00:31:01.017 } 00:31:01.017 ] 00:31:01.017 } 00:31:01.017 ] 00:31:01.017 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.017 07:50:04 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:01.017 07:50:04 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:01.017 07:50:04 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:01.017 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.017 07:50:04 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:31:01.018 07:50:04 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:01.018 07:50:04 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:01.018 07:50:04 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:01.018 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.018 
07:50:04 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:01.018 07:50:04 -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:31:01.018 07:50:04 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:01.018 07:50:04 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.018 07:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.018 07:50:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.018 07:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.018 07:50:04 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:01.018 07:50:04 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:01.018 07:50:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:01.018 07:50:04 -- nvmf/common.sh@116 -- # sync 00:31:01.018 07:50:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:01.018 07:50:04 -- nvmf/common.sh@119 -- # set +e 00:31:01.018 07:50:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:01.018 07:50:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:01.018 rmmod nvme_tcp 00:31:01.018 rmmod nvme_fabrics 00:31:01.018 rmmod nvme_keyring 00:31:01.018 07:50:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:01.018 07:50:04 -- nvmf/common.sh@123 -- # set -e 00:31:01.018 07:50:04 -- nvmf/common.sh@124 -- # return 0 00:31:01.018 07:50:04 -- nvmf/common.sh@477 -- # '[' -n 115632 ']' 00:31:01.018 07:50:04 -- nvmf/common.sh@478 -- # killprocess 115632 00:31:01.018 07:50:04 -- common/autotest_common.sh@926 -- # '[' -z 115632 ']' 00:31:01.018 07:50:04 -- common/autotest_common.sh@930 -- # kill -0 115632 00:31:01.018 07:50:04 -- common/autotest_common.sh@931 -- # uname 00:31:01.018 07:50:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:01.018 07:50:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115632 00:31:01.018 07:50:04 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:01.018 07:50:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:01.018 07:50:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115632' 00:31:01.018 killing process with pid 115632 00:31:01.018 07:50:04 -- common/autotest_common.sh@945 -- # kill 115632 00:31:01.018 [2024-10-07 07:50:04.822285] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:01.018 07:50:04 -- common/autotest_common.sh@950 -- # wait 115632 00:31:02.398 07:50:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:02.398 07:50:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:02.398 07:50:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:02.398 07:50:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.398 07:50:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:02.398 07:50:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.398 07:50:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:02.398 07:50:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.935 07:50:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:04.935 00:31:04.935 real 0m21.702s 00:31:04.935 user 0m29.680s 00:31:04.935 sys 0m4.875s 00:31:04.935 07:50:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:04.935 07:50:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.935 ************************************ 00:31:04.935 END TEST nvmf_identify_passthru 00:31:04.935 ************************************ 00:31:04.935 07:50:08 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:04.935 07:50:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:04.935 07:50:08 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:31:04.935 07:50:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.935 ************************************ 00:31:04.935 START TEST nvmf_dif 00:31:04.935 ************************************ 00:31:04.935 07:50:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:04.935 * Looking for test storage... 00:31:04.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.935 07:50:08 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.935 07:50:08 -- nvmf/common.sh@7 -- # uname -s 00:31:04.935 07:50:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.935 07:50:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.935 07:50:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.935 07:50:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.935 07:50:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.935 07:50:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.935 07:50:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.935 07:50:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.935 07:50:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.935 07:50:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.935 07:50:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:04.935 07:50:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:04.935 07:50:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.935 07:50:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.935 07:50:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.935 07:50:08 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.935 07:50:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.935 07:50:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.935 07:50:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.935 07:50:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.935 07:50:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.935 07:50:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.935 07:50:08 -- paths/export.sh@5 -- # export PATH 00:31:04.935 07:50:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.935 07:50:08 -- nvmf/common.sh@46 -- # : 0 00:31:04.935 07:50:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:04.935 07:50:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:04.935 07:50:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:04.935 07:50:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.935 07:50:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.935 07:50:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:04.935 07:50:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:04.935 07:50:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:04.935 07:50:08 -- target/dif.sh@15 -- # NULL_META=16 00:31:04.935 07:50:08 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:04.935 07:50:08 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:04.935 07:50:08 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:04.935 07:50:08 -- target/dif.sh@135 -- # nvmftestinit 00:31:04.935 07:50:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:04.935 07:50:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.935 07:50:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:04.935 07:50:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:04.935 07:50:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:04.935 07:50:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.935 07:50:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.935 07:50:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.935 07:50:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:31:04.935 07:50:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:04.935 07:50:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:04.935 07:50:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.205 07:50:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:10.205 07:50:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:10.205 07:50:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:10.205 07:50:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:10.205 07:50:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:10.205 07:50:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:10.205 07:50:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:10.205 07:50:13 -- nvmf/common.sh@294 -- # net_devs=() 00:31:10.205 07:50:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:10.205 07:50:13 -- nvmf/common.sh@295 -- # e810=() 00:31:10.205 07:50:13 -- nvmf/common.sh@295 -- # local -ga e810 00:31:10.205 07:50:13 -- nvmf/common.sh@296 -- # x722=() 00:31:10.205 07:50:13 -- nvmf/common.sh@296 -- # local -ga x722 00:31:10.205 07:50:13 -- nvmf/common.sh@297 -- # mlx=() 00:31:10.205 07:50:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:10.205 07:50:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.205 07:50:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:10.205 07:50:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:10.205 07:50:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:10.205 07:50:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:10.205 07:50:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:10.205 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:10.205 07:50:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:10.205 07:50:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:10.205 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:10.205 07:50:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:10.205 07:50:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:31:10.205 07:50:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:10.205 07:50:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.205 07:50:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:10.205 07:50:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.205 07:50:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:10.206 Found net devices under 0000:af:00.0: cvl_0_0 00:31:10.206 07:50:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.206 07:50:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:10.206 07:50:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.206 07:50:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:10.206 07:50:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.206 07:50:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:10.206 Found net devices under 0000:af:00.1: cvl_0_1 00:31:10.206 07:50:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.206 07:50:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:10.206 07:50:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:10.206 07:50:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:10.206 07:50:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:10.206 07:50:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:10.206 07:50:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.206 07:50:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.206 07:50:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.206 07:50:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:10.206 07:50:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.206 07:50:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.206 07:50:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:31:10.206 07:50:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.206 07:50:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.206 07:50:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:10.206 07:50:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:10.206 07:50:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.206 07:50:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.206 07:50:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.206 07:50:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.206 07:50:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:10.206 07:50:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.206 07:50:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.206 07:50:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.206 07:50:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:10.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:31:10.206 00:31:10.206 --- 10.0.0.2 ping statistics --- 00:31:10.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.206 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:31:10.206 07:50:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:31:10.206 00:31:10.206 --- 10.0.0.1 ping statistics --- 00:31:10.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.206 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:31:10.206 07:50:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.206 07:50:13 -- nvmf/common.sh@410 -- # return 0 00:31:10.206 07:50:13 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:10.206 07:50:13 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:12.741 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:12.741 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:12.741 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:12.742 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:12.742 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:12.742 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:12.742 07:50:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.742 07:50:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:31:12.742 07:50:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:12.742 07:50:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.742 07:50:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:12.742 07:50:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:12.742 07:50:16 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:12.742 07:50:16 -- target/dif.sh@137 -- # nvmfappstart 00:31:12.742 07:50:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:12.742 07:50:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:12.742 07:50:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.742 07:50:16 -- nvmf/common.sh@469 -- # nvmfpid=121049 00:31:12.742 07:50:16 -- nvmf/common.sh@470 -- # waitforlisten 121049 00:31:12.742 07:50:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:12.742 07:50:16 -- common/autotest_common.sh@819 -- # '[' -z 121049 ']' 00:31:12.742 07:50:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.742 07:50:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:12.742 07:50:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.742 07:50:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:12.742 07:50:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.742 [2024-10-07 07:50:16.343456] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:12.742 [2024-10-07 07:50:16.343493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.742 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.742 [2024-10-07 07:50:16.401305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.742 [2024-10-07 07:50:16.478197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:12.742 [2024-10-07 07:50:16.478304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.742 [2024-10-07 07:50:16.478313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.742 [2024-10-07 07:50:16.478320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.742 [2024-10-07 07:50:16.478341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.310 07:50:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:13.310 07:50:17 -- common/autotest_common.sh@852 -- # return 0 00:31:13.310 07:50:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:13.310 07:50:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 07:50:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.310 07:50:17 -- target/dif.sh@139 -- # create_transport 00:31:13.310 07:50:17 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:13.310 07:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 [2024-10-07 07:50:17.187896] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:31:13.310 07:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.310 07:50:17 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:13.310 07:50:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:13.310 07:50:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 ************************************ 00:31:13.310 START TEST fio_dif_1_default 00:31:13.310 ************************************ 00:31:13.310 07:50:17 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:31:13.310 07:50:17 -- target/dif.sh@86 -- # create_subsystems 0 00:31:13.310 07:50:17 -- target/dif.sh@28 -- # local sub 00:31:13.310 07:50:17 -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.310 07:50:17 -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.310 07:50:17 -- target/dif.sh@18 -- # local sub_id=0 00:31:13.310 07:50:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:13.310 07:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 bdev_null0 00:31:13.310 07:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.310 07:50:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.310 07:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 07:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.310 07:50:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.310 07:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 07:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.310 07:50:17 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.310 07:50:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.310 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.310 [2024-10-07 07:50:17.224147] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.310 07:50:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.310 07:50:17 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:13.310 07:50:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.310 07:50:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.310 07:50:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:13.310 07:50:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.310 07:50:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:13.310 07:50:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.310 07:50:17 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:13.310 07:50:17 -- common/autotest_common.sh@1320 -- # shift 00:31:13.310 07:50:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:13.310 07:50:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.310 07:50:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:13.310 07:50:17 -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.310 07:50:17 -- nvmf/common.sh@520 -- # config=() 00:31:13.310 07:50:17 -- target/dif.sh@54 -- # local file 00:31:13.310 07:50:17 -- nvmf/common.sh@520 -- # local subsystem config 00:31:13.310 07:50:17 -- target/dif.sh@56 -- # cat 00:31:13.310 07:50:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:31:13.310 07:50:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:13.310 { 00:31:13.310 "params": { 00:31:13.310 "name": "Nvme$subsystem", 00:31:13.310 "trtype": "$TEST_TRANSPORT", 00:31:13.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.310 "adrfam": "ipv4", 00:31:13.310 "trsvcid": "$NVMF_PORT", 00:31:13.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.310 "hdgst": ${hdgst:-false}, 00:31:13.310 "ddgst": ${ddgst:-false} 00:31:13.310 }, 00:31:13.310 "method": "bdev_nvme_attach_controller" 00:31:13.310 } 00:31:13.310 EOF 00:31:13.310 )") 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:13.310 07:50:17 -- nvmf/common.sh@542 -- # cat 00:31:13.310 07:50:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.310 07:50:17 -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.310 07:50:17 -- nvmf/common.sh@544 -- # jq . 
00:31:13.310 07:50:17 -- nvmf/common.sh@545 -- # IFS=, 00:31:13.310 07:50:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:13.310 "params": { 00:31:13.310 "name": "Nvme0", 00:31:13.310 "trtype": "tcp", 00:31:13.310 "traddr": "10.0.0.2", 00:31:13.310 "adrfam": "ipv4", 00:31:13.310 "trsvcid": "4420", 00:31:13.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.310 "hdgst": false, 00:31:13.310 "ddgst": false 00:31:13.310 }, 00:31:13.310 "method": "bdev_nvme_attach_controller" 00:31:13.310 }' 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:13.310 07:50:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:13.310 07:50:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:13.310 07:50:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:13.585 07:50:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:13.585 07:50:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:13.585 07:50:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.585 07:50:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.843 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:13.843 fio-3.35 00:31:13.843 Starting 1 thread 00:31:13.843 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.100 [2024-10-07 07:50:17.886301] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:14.100 [2024-10-07 07:50:17.886344] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:24.059 00:31:24.059 filename0: (groupid=0, jobs=1): err= 0: pid=121433: Mon Oct 7 07:50:28 2024 00:31:24.059 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10004msec) 00:31:24.059 slat (nsec): min=5655, max=25143, avg=6059.10, stdev=1333.66 00:31:24.059 clat (usec): min=640, max=43048, avg=21087.20, stdev=20252.71 00:31:24.059 lat (usec): min=646, max=43073, avg=21093.26, stdev=20252.79 00:31:24.059 clat percentiles (usec): 00:31:24.059 | 1.00th=[ 676], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 750], 00:31:24.059 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[41157], 60.00th=[41157], 00:31:24.059 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:24.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:24.059 | 99.99th=[43254] 00:31:24.060 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:31:24.060 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:31:24.060 lat (usec) : 750=20.99%, 1000=28.38% 00:31:24.060 lat (msec) : 2=0.42%, 50=50.21% 00:31:24.060 cpu : usr=92.77%, sys=6.97%, ctx=14, majf=0, minf=247 00:31:24.060 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.060 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.060 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.060 00:31:24.060 Run status group 0 (all jobs): 00:31:24.060 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10004-10004msec 00:31:24.318 07:50:28 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:24.318 07:50:28 -- target/dif.sh@43 -- # local sub 00:31:24.318 07:50:28 -- 
target/dif.sh@45 -- # for sub in "$@" 00:31:24.318 07:50:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.318 07:50:28 -- target/dif.sh@36 -- # local sub_id=0 00:31:24.318 07:50:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 00:31:24.318 real 0m10.996s 00:31:24.318 user 0m15.842s 00:31:24.318 sys 0m0.962s 00:31:24.318 07:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 ************************************ 00:31:24.318 END TEST fio_dif_1_default 00:31:24.318 ************************************ 00:31:24.318 07:50:28 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:24.318 07:50:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:24.318 07:50:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 ************************************ 00:31:24.318 START TEST fio_dif_1_multi_subsystems 00:31:24.318 ************************************ 00:31:24.318 07:50:28 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:24.318 07:50:28 -- target/dif.sh@92 -- # local files=1 00:31:24.318 07:50:28 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:24.318 07:50:28 -- target/dif.sh@28 -- # local sub 00:31:24.318 07:50:28 -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.318 07:50:28 
-- target/dif.sh@31 -- # create_subsystem 0 00:31:24.318 07:50:28 -- target/dif.sh@18 -- # local sub_id=0 00:31:24.318 07:50:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 bdev_null0 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 [2024-10-07 07:50:28.265333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.318 07:50:28 -- target/dif.sh@31 -- # create_subsystem 1 00:31:24.318 07:50:28 -- target/dif.sh@18 -- # local sub_id=1 00:31:24.318 07:50:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 bdev_null1 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.318 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.318 07:50:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:24.318 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.318 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.578 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.578 07:50:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.578 07:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.578 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.578 07:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.578 07:50:28 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:24.578 07:50:28 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:24.578 07:50:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.578 07:50:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:24.578 07:50:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.578 07:50:28 -- nvmf/common.sh@520 -- # config=() 00:31:24.578 07:50:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:24.578 07:50:28 -- nvmf/common.sh@520 -- # local subsystem config 00:31:24.578 07:50:28 -- 
common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.578 07:50:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:24.578 07:50:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:24.578 07:50:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.578 07:50:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:24.578 { 00:31:24.578 "params": { 00:31:24.578 "name": "Nvme$subsystem", 00:31:24.578 "trtype": "$TEST_TRANSPORT", 00:31:24.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.578 "adrfam": "ipv4", 00:31:24.578 "trsvcid": "$NVMF_PORT", 00:31:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.578 "hdgst": ${hdgst:-false}, 00:31:24.578 "ddgst": ${ddgst:-false} 00:31:24.578 }, 00:31:24.578 "method": "bdev_nvme_attach_controller" 00:31:24.578 } 00:31:24.578 EOF 00:31:24.578 )") 00:31:24.578 07:50:28 -- common/autotest_common.sh@1320 -- # shift 00:31:24.578 07:50:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:24.578 07:50:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.578 07:50:28 -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.578 07:50:28 -- target/dif.sh@54 -- # local file 00:31:24.578 07:50:28 -- target/dif.sh@56 -- # cat 00:31:24.578 07:50:28 -- nvmf/common.sh@542 -- # cat 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:24.578 07:50:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.578 07:50:28 -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.578 07:50:28 -- target/dif.sh@73 -- # cat 00:31:24.578 07:50:28 -- nvmf/common.sh@522 -- # for subsystem 
in "${@:-1}" 00:31:24.578 07:50:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:24.578 { 00:31:24.578 "params": { 00:31:24.578 "name": "Nvme$subsystem", 00:31:24.578 "trtype": "$TEST_TRANSPORT", 00:31:24.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.578 "adrfam": "ipv4", 00:31:24.578 "trsvcid": "$NVMF_PORT", 00:31:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.578 "hdgst": ${hdgst:-false}, 00:31:24.578 "ddgst": ${ddgst:-false} 00:31:24.578 }, 00:31:24.578 "method": "bdev_nvme_attach_controller" 00:31:24.578 } 00:31:24.578 EOF 00:31:24.578 )") 00:31:24.578 07:50:28 -- target/dif.sh@72 -- # (( file++ )) 00:31:24.578 07:50:28 -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.578 07:50:28 -- nvmf/common.sh@542 -- # cat 00:31:24.578 07:50:28 -- nvmf/common.sh@544 -- # jq . 00:31:24.578 07:50:28 -- nvmf/common.sh@545 -- # IFS=, 00:31:24.578 07:50:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:24.578 "params": { 00:31:24.578 "name": "Nvme0", 00:31:24.578 "trtype": "tcp", 00:31:24.578 "traddr": "10.0.0.2", 00:31:24.578 "adrfam": "ipv4", 00:31:24.578 "trsvcid": "4420", 00:31:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.578 "hdgst": false, 00:31:24.578 "ddgst": false 00:31:24.578 }, 00:31:24.578 "method": "bdev_nvme_attach_controller" 00:31:24.578 },{ 00:31:24.578 "params": { 00:31:24.578 "name": "Nvme1", 00:31:24.578 "trtype": "tcp", 00:31:24.578 "traddr": "10.0.0.2", 00:31:24.578 "adrfam": "ipv4", 00:31:24.578 "trsvcid": "4420", 00:31:24.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:24.578 "hdgst": false, 00:31:24.578 "ddgst": false 00:31:24.578 }, 00:31:24.578 "method": "bdev_nvme_attach_controller" 00:31:24.578 }' 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:24.578 07:50:28 -- common/autotest_common.sh@1325 -- # [[ -n 
'' ]] 00:31:24.578 07:50:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:24.578 07:50:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:24.578 07:50:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:24.578 07:50:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.578 07:50:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.835 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.836 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.836 fio-3.35 00:31:24.836 Starting 2 threads 00:31:24.836 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.401 [2024-10-07 07:50:29.306706] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:25.401 [2024-10-07 07:50:29.306750] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:37.599 00:31:37.599 filename0: (groupid=0, jobs=1): err= 0: pid=123409: Mon Oct 7 07:50:39 2024 00:31:37.599 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10019msec) 00:31:37.599 slat (nsec): min=5762, max=23439, avg=7377.79, stdev=2271.43 00:31:37.599 clat (usec): min=40804, max=44767, avg=41039.35, stdev=314.25 00:31:37.599 lat (usec): min=40810, max=44790, avg=41046.72, stdev=314.56 00:31:37.599 clat percentiles (usec): 00:31:37.599 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:37.599 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:37.599 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.599 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:31:37.599 | 99.99th=[44827] 00:31:37.599 bw ( KiB/s): min= 384, max= 416, per=33.79%, avg=388.80, stdev=11.72, samples=20 00:31:37.599 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:37.599 lat (msec) : 50=100.00% 00:31:37.599 cpu : usr=97.20%, sys=2.55%, ctx=16, majf=0, minf=211 00:31:37.599 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.599 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.599 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:37.599 filename1: (groupid=0, jobs=1): err= 0: pid=123410: Mon Oct 7 07:50:39 2024 00:31:37.599 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10005msec) 00:31:37.599 slat (nsec): min=5763, max=23966, avg=6900.00, stdev=1868.72 00:31:37.599 clat (usec): min=640, max=44312, avg=21042.33, stdev=20286.19 00:31:37.599 lat (usec): min=646, max=44336, avg=21049.23, stdev=20285.68 00:31:37.599 clat 
percentiles (usec): 00:31:37.599 | 1.00th=[ 652], 5.00th=[ 660], 10.00th=[ 660], 20.00th=[ 668], 00:31:37.599 | 30.00th=[ 693], 40.00th=[ 791], 50.00th=[41157], 60.00th=[41157], 00:31:37.599 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.599 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:37.599 | 99.99th=[44303] 00:31:37.599 bw ( KiB/s): min= 704, max= 768, per=66.28%, avg=761.26, stdev=20.18, samples=19 00:31:37.599 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:31:37.599 lat (usec) : 750=34.63%, 1000=15.21% 00:31:37.599 lat (msec) : 2=0.05%, 50=50.11% 00:31:37.599 cpu : usr=97.07%, sys=2.68%, ctx=10, majf=0, minf=104 00:31:37.599 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.599 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.599 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:37.599 00:31:37.599 Run status group 0 (all jobs): 00:31:37.599 READ: bw=1148KiB/s (1176kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), run=10005-10019msec 00:31:37.599 07:50:39 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:37.599 07:50:39 -- target/dif.sh@43 -- # local sub 00:31:37.599 07:50:39 -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.599 07:50:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:37.599 07:50:39 -- target/dif.sh@36 -- # local sub_id=0 00:31:37.599 07:50:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@45 -- # for sub in "$@" 00:31:37.599 07:50:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:37.599 07:50:39 -- target/dif.sh@36 -- # local sub_id=1 00:31:37.599 07:50:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 00:31:37.599 real 0m11.410s 00:31:37.599 user 0m26.278s 00:31:37.599 sys 0m0.813s 00:31:37.599 07:50:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 ************************************ 00:31:37.599 END TEST fio_dif_1_multi_subsystems 00:31:37.599 ************************************ 00:31:37.599 07:50:39 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:37.599 07:50:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:37.599 07:50:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 ************************************ 00:31:37.599 START TEST fio_dif_rand_params 00:31:37.599 ************************************ 00:31:37.599 07:50:39 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:37.599 07:50:39 -- 
target/dif.sh@100 -- # local NULL_DIF 00:31:37.599 07:50:39 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:37.599 07:50:39 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:37.599 07:50:39 -- target/dif.sh@103 -- # bs=128k 00:31:37.599 07:50:39 -- target/dif.sh@103 -- # numjobs=3 00:31:37.599 07:50:39 -- target/dif.sh@103 -- # iodepth=3 00:31:37.599 07:50:39 -- target/dif.sh@103 -- # runtime=5 00:31:37.599 07:50:39 -- target/dif.sh@105 -- # create_subsystems 0 00:31:37.599 07:50:39 -- target/dif.sh@28 -- # local sub 00:31:37.599 07:50:39 -- target/dif.sh@30 -- # for sub in "$@" 00:31:37.599 07:50:39 -- target/dif.sh@31 -- # create_subsystem 0 00:31:37.599 07:50:39 -- target/dif.sh@18 -- # local sub_id=0 00:31:37.599 07:50:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 bdev_null0 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.599 07:50:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.599 
07:50:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.599 [2024-10-07 07:50:39.720403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.599 07:50:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.599 07:50:39 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:37.599 07:50:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.599 07:50:39 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:37.599 07:50:39 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.599 07:50:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:37.599 07:50:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:37.599 07:50:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.599 07:50:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:37.599 07:50:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.599 07:50:39 -- nvmf/common.sh@520 -- # config=() 00:31:37.599 07:50:39 -- target/dif.sh@82 -- # gen_fio_conf 00:31:37.599 07:50:39 -- common/autotest_common.sh@1320 -- # shift 00:31:37.599 07:50:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:37.599 07:50:39 -- nvmf/common.sh@520 -- # local subsystem config 00:31:37.599 07:50:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.599 07:50:39 -- target/dif.sh@54 -- # local file 00:31:37.599 07:50:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:37.599 07:50:39 -- target/dif.sh@56 -- # cat 00:31:37.599 07:50:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:37.599 { 00:31:37.599 "params": { 00:31:37.599 "name": "Nvme$subsystem", 00:31:37.599 "trtype": 
"$TEST_TRANSPORT", 00:31:37.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.599 "adrfam": "ipv4", 00:31:37.599 "trsvcid": "$NVMF_PORT", 00:31:37.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.599 "hdgst": ${hdgst:-false}, 00:31:37.599 "ddgst": ${ddgst:-false} 00:31:37.599 }, 00:31:37.599 "method": "bdev_nvme_attach_controller" 00:31:37.599 } 00:31:37.599 EOF 00:31:37.599 )") 00:31:37.599 07:50:39 -- nvmf/common.sh@542 -- # cat 00:31:37.599 07:50:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.599 07:50:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:37.599 07:50:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:37.599 07:50:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:37.599 07:50:39 -- target/dif.sh@72 -- # (( file <= files )) 00:31:37.599 07:50:39 -- nvmf/common.sh@544 -- # jq . 00:31:37.599 07:50:39 -- nvmf/common.sh@545 -- # IFS=, 00:31:37.600 07:50:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:37.600 "params": { 00:31:37.600 "name": "Nvme0", 00:31:37.600 "trtype": "tcp", 00:31:37.600 "traddr": "10.0.0.2", 00:31:37.600 "adrfam": "ipv4", 00:31:37.600 "trsvcid": "4420", 00:31:37.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.600 "hdgst": false, 00:31:37.600 "ddgst": false 00:31:37.600 }, 00:31:37.600 "method": "bdev_nvme_attach_controller" 00:31:37.600 }' 00:31:37.600 07:50:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:37.600 07:50:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:37.600 07:50:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.600 07:50:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.600 07:50:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 
00:31:37.600 07:50:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:37.600 07:50:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:37.600 07:50:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:37.600 07:50:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:37.600 07:50:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.600 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:37.600 ... 00:31:37.600 fio-3.35 00:31:37.600 Starting 3 threads 00:31:37.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.600 [2024-10-07 07:50:40.513390] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:37.600 [2024-10-07 07:50:40.513438] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:41.786 00:31:41.786 filename0: (groupid=0, jobs=1): err= 0: pid=125313: Mon Oct 7 07:50:45 2024 00:31:41.786 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(172MiB/5047msec) 00:31:41.786 slat (nsec): min=6014, max=24397, avg=9380.75, stdev=2407.77 00:31:41.786 clat (usec): min=3666, max=91731, avg=10959.00, stdev=12905.88 00:31:41.786 lat (usec): min=3673, max=91743, avg=10968.38, stdev=12906.00 00:31:41.786 clat percentiles (usec): 00:31:41.786 | 1.00th=[ 4228], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5866], 00:31:41.786 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7242], 00:31:41.786 | 70.00th=[ 8029], 80.00th=[ 8979], 90.00th=[10552], 95.00th=[47973], 00:31:41.786 | 99.00th=[50594], 99.50th=[51643], 99.90th=[89654], 99.95th=[91751], 00:31:41.786 | 99.99th=[91751] 00:31:41.786 bw ( KiB/s): min=25088, max=43008, per=33.51%, avg=35148.80, stdev=6025.58, samples=10 00:31:41.786 iops : min= 196, max= 336, 
avg=274.60, stdev=47.07, samples=10 00:31:41.786 lat (msec) : 4=0.29%, 10=88.44%, 20=1.89%, 50=7.56%, 100=1.82% 00:31:41.786 cpu : usr=94.85%, sys=4.80%, ctx=15, majf=0, minf=51 00:31:41.786 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.786 filename0: (groupid=0, jobs=1): err= 0: pid=125314: Mon Oct 7 07:50:45 2024 00:31:41.786 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(136MiB/5040msec) 00:31:41.786 slat (nsec): min=6021, max=23972, avg=9171.45, stdev=2557.56 00:31:41.786 clat (usec): min=4053, max=90039, avg=13858.44, stdev=15784.36 00:31:41.786 lat (usec): min=4059, max=90052, avg=13867.61, stdev=15784.48 00:31:41.786 clat percentiles (usec): 00:31:41.786 | 1.00th=[ 4555], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:31:41.786 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 8094], 00:31:41.786 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[47973], 95.00th=[49546], 00:31:41.786 | 99.00th=[51119], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:31:41.786 | 99.99th=[89654] 00:31:41.786 bw ( KiB/s): min=22272, max=33536, per=26.53%, avg=27827.20, stdev=3559.64, samples=10 00:31:41.786 iops : min= 174, max= 262, avg=217.40, stdev=27.81, samples=10 00:31:41.786 lat (msec) : 10=78.17%, 20=6.70%, 50=11.28%, 100=3.85% 00:31:41.786 cpu : usr=94.88%, sys=4.80%, ctx=10, majf=0, minf=162 00:31:41.786 IO depths : 1=3.9%, 2=96.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 issued rwts: total=1090,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:41.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.786 filename0: (groupid=0, jobs=1): err= 0: pid=125315: Mon Oct 7 07:50:45 2024 00:31:41.786 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(209MiB/5045msec) 00:31:41.786 slat (nsec): min=6071, max=23824, avg=9211.51, stdev=2244.34 00:31:41.786 clat (usec): min=3773, max=89840, avg=9025.88, stdev=10056.79 00:31:41.786 lat (usec): min=3779, max=89852, avg=9035.09, stdev=10056.96 00:31:41.786 clat percentiles (usec): 00:31:41.786 | 1.00th=[ 3982], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 5211], 00:31:41.786 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7177], 00:31:41.786 | 70.00th=[ 7635], 80.00th=[ 8586], 90.00th=[ 9765], 95.00th=[12387], 00:31:41.786 | 99.00th=[50070], 99.50th=[51119], 99.90th=[89654], 99.95th=[89654], 00:31:41.786 | 99.99th=[89654] 00:31:41.786 bw ( KiB/s): min=31488, max=55040, per=40.71%, avg=42700.80, stdev=7889.34, samples=10 00:31:41.786 iops : min= 246, max= 430, avg=333.60, stdev=61.64, samples=10 00:31:41.786 lat (msec) : 4=1.02%, 10=90.60%, 20=3.41%, 50=4.01%, 100=0.96% 00:31:41.786 cpu : usr=93.93%, sys=5.73%, ctx=8, majf=0, minf=67 00:31:41.786 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.786 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.786 00:31:41.786 Run status group 0 (all jobs): 00:31:41.786 READ: bw=102MiB/s (107MB/s), 27.0MiB/s-41.4MiB/s (28.3MB/s-43.4MB/s), io=517MiB (542MB), run=5040-5047msec 00:31:42.046 07:50:45 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:42.046 07:50:45 -- target/dif.sh@43 -- # local sub 00:31:42.046 07:50:45 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.046 07:50:45 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:31:42.046 07:50:45 -- target/dif.sh@36 -- # local sub_id=0 00:31:42.046 07:50:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # bs=4k 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # numjobs=8 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # iodepth=16 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # runtime= 00:31:42.046 07:50:45 -- target/dif.sh@109 -- # files=2 00:31:42.046 07:50:45 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:42.046 07:50:45 -- target/dif.sh@28 -- # local sub 00:31:42.046 07:50:45 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.046 07:50:45 -- target/dif.sh@31 -- # create_subsystem 0 00:31:42.046 07:50:45 -- target/dif.sh@18 -- # local sub_id=0 00:31:42.046 07:50:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 bdev_null0 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set 
+x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 [2024-10-07 07:50:45.910963] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.046 07:50:45 -- target/dif.sh@31 -- # create_subsystem 1 00:31:42.046 07:50:45 -- target/dif.sh@18 -- # local sub_id=1 00:31:42.046 07:50:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 bdev_null1 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.046 07:50:45 -- target/dif.sh@31 -- # create_subsystem 2 00:31:42.046 07:50:45 -- target/dif.sh@18 -- # local sub_id=2 00:31:42.046 07:50:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 bdev_null2 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:42.046 07:50:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.046 07:50:45 -- common/autotest_common.sh@10 -- # 
set +x 00:31:42.046 07:50:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.046 07:50:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:42.046 07:50:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:42.046 07:50:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:42.046 07:50:45 -- nvmf/common.sh@520 -- # config=() 00:31:42.046 07:50:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.046 07:50:45 -- nvmf/common.sh@520 -- # local subsystem config 00:31:42.046 07:50:45 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.046 07:50:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.046 07:50:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:42.046 07:50:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.046 { 00:31:42.046 "params": { 00:31:42.046 "name": "Nvme$subsystem", 00:31:42.046 "trtype": "$TEST_TRANSPORT", 00:31:42.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.046 "adrfam": "ipv4", 00:31:42.046 "trsvcid": "$NVMF_PORT", 00:31:42.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.046 "hdgst": ${hdgst:-false}, 00:31:42.046 "ddgst": ${ddgst:-false} 00:31:42.046 }, 00:31:42.046 "method": "bdev_nvme_attach_controller" 00:31:42.046 } 00:31:42.046 EOF 00:31:42.046 )") 00:31:42.046 07:50:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.046 07:50:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:42.046 07:50:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.046 07:50:45 -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.046 07:50:45 -- common/autotest_common.sh@1320 -- # shift 00:31:42.046 
07:50:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:42.046 07:50:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.046 07:50:45 -- target/dif.sh@54 -- # local file 00:31:42.046 07:50:45 -- target/dif.sh@56 -- # cat 00:31:42.046 07:50:45 -- nvmf/common.sh@542 -- # cat 00:31:42.046 07:50:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.046 07:50:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:42.046 07:50:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:42.046 07:50:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.046 07:50:45 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.047 07:50:45 -- target/dif.sh@73 -- # cat 00:31:42.047 07:50:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.047 07:50:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.047 { 00:31:42.047 "params": { 00:31:42.047 "name": "Nvme$subsystem", 00:31:42.047 "trtype": "$TEST_TRANSPORT", 00:31:42.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.047 "adrfam": "ipv4", 00:31:42.047 "trsvcid": "$NVMF_PORT", 00:31:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.047 "hdgst": ${hdgst:-false}, 00:31:42.047 "ddgst": ${ddgst:-false} 00:31:42.047 }, 00:31:42.047 "method": "bdev_nvme_attach_controller" 00:31:42.047 } 00:31:42.047 EOF 00:31:42.047 )") 00:31:42.047 07:50:45 -- nvmf/common.sh@542 -- # cat 00:31:42.047 07:50:45 -- target/dif.sh@72 -- # (( file++ )) 00:31:42.047 07:50:45 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.047 07:50:45 -- target/dif.sh@73 -- # cat 00:31:42.047 07:50:45 -- target/dif.sh@72 -- # (( file++ )) 00:31:42.047 07:50:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.047 07:50:45 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.047 07:50:45 -- nvmf/common.sh@542 -- # config+=("$(cat 
<<-EOF 00:31:42.047 { 00:31:42.047 "params": { 00:31:42.047 "name": "Nvme$subsystem", 00:31:42.047 "trtype": "$TEST_TRANSPORT", 00:31:42.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.047 "adrfam": "ipv4", 00:31:42.047 "trsvcid": "$NVMF_PORT", 00:31:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.047 "hdgst": ${hdgst:-false}, 00:31:42.047 "ddgst": ${ddgst:-false} 00:31:42.047 }, 00:31:42.047 "method": "bdev_nvme_attach_controller" 00:31:42.047 } 00:31:42.047 EOF 00:31:42.047 )") 00:31:42.047 07:50:45 -- nvmf/common.sh@542 -- # cat 00:31:42.047 07:50:45 -- nvmf/common.sh@544 -- # jq . 00:31:42.047 07:50:46 -- nvmf/common.sh@545 -- # IFS=, 00:31:42.047 07:50:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:42.047 "params": { 00:31:42.047 "name": "Nvme0", 00:31:42.047 "trtype": "tcp", 00:31:42.047 "traddr": "10.0.0.2", 00:31:42.047 "adrfam": "ipv4", 00:31:42.047 "trsvcid": "4420", 00:31:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.047 "hdgst": false, 00:31:42.047 "ddgst": false 00:31:42.047 }, 00:31:42.047 "method": "bdev_nvme_attach_controller" 00:31:42.047 },{ 00:31:42.047 "params": { 00:31:42.047 "name": "Nvme1", 00:31:42.047 "trtype": "tcp", 00:31:42.047 "traddr": "10.0.0.2", 00:31:42.047 "adrfam": "ipv4", 00:31:42.047 "trsvcid": "4420", 00:31:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.047 "hdgst": false, 00:31:42.047 "ddgst": false 00:31:42.047 }, 00:31:42.047 "method": "bdev_nvme_attach_controller" 00:31:42.047 },{ 00:31:42.047 "params": { 00:31:42.047 "name": "Nvme2", 00:31:42.047 "trtype": "tcp", 00:31:42.047 "traddr": "10.0.0.2", 00:31:42.047 "adrfam": "ipv4", 00:31:42.047 "trsvcid": "4420", 00:31:42.047 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:42.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:42.047 "hdgst": false, 00:31:42.047 "ddgst": 
false 00:31:42.047 }, 00:31:42.047 "method": "bdev_nvme_attach_controller" 00:31:42.047 }' 00:31:42.047 07:50:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:42.047 07:50:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:42.047 07:50:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.047 07:50:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.322 07:50:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:42.322 07:50:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:42.322 07:50:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:42.322 07:50:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:42.322 07:50:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.322 07:50:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.583 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.583 ... 00:31:42.583 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.583 ... 00:31:42.583 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:42.583 ... 00:31:42.583 fio-3.35 00:31:42.583 Starting 24 threads 00:31:42.583 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.516 [2024-10-07 07:50:47.243444] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:43.516 [2024-10-07 07:50:47.243487] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:53.485 00:31:53.485 filename0: (groupid=0, jobs=1): err= 0: pid=126570: Mon Oct 7 07:50:57 2024 00:31:53.485 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10009msec) 00:31:53.485 slat (usec): min=7, max=112, avg=28.06, stdev=17.71 00:31:53.485 clat (usec): min=13411, max=43550, avg=28112.03, stdev=1423.58 00:31:53.485 lat (usec): min=13418, max=43611, avg=28140.08, stdev=1422.74 00:31:53.485 clat percentiles (usec): 00:31:53.485 | 1.00th=[25560], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.485 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:31:53.485 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.485 | 99.00th=[30802], 99.50th=[32637], 99.90th=[42206], 99.95th=[42730], 00:31:53.485 | 99.99th=[43779] 00:31:53.485 bw ( KiB/s): min= 2176, max= 2304, per=4.18%, avg=2252.80, stdev=64.34, samples=20 00:31:53.485 iops : min= 544, max= 576, avg=563.20, stdev=16.08, samples=20 00:31:53.485 lat (msec) : 20=0.39%, 50=99.61% 00:31:53.485 cpu : usr=98.80%, sys=0.79%, ctx=17, majf=0, minf=54 00:31:53.485 IO depths : 1=5.4%, 2=11.3%, 4=24.5%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:31:53.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.485 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.485 filename0: (groupid=0, jobs=1): err= 0: pid=126571: Mon Oct 7 07:50:57 2024 00:31:53.485 read: IOPS=527, BW=2110KiB/s (2160kB/s)(20.6MiB/10003msec) 00:31:53.485 slat (usec): min=7, max=115, avg=35.60, stdev=22.55 00:31:53.485 clat (usec): min=6694, max=84819, avg=30051.05, stdev=5777.91 00:31:53.485 lat (usec): min=6705, max=84865, avg=30086.65, stdev=5772.96 00:31:53.485 
clat percentiles (usec): 00:31:53.485 | 1.00th=[19792], 5.00th=[26608], 10.00th=[27657], 20.00th=[27657], 00:31:53.485 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.485 | 70.00th=[28705], 80.00th=[30802], 90.00th=[37487], 95.00th=[43779], 00:31:53.485 | 99.00th=[47973], 99.50th=[49546], 99.90th=[71828], 99.95th=[71828], 00:31:53.485 | 99.99th=[84411] 00:31:53.485 bw ( KiB/s): min= 1664, max= 2304, per=3.88%, avg=2093.47, stdev=239.56, samples=19 00:31:53.485 iops : min= 416, max= 576, avg=523.37, stdev=59.89, samples=19 00:31:53.485 lat (msec) : 10=0.72%, 20=0.44%, 50=98.43%, 100=0.42% 00:31:53.485 cpu : usr=98.68%, sys=0.91%, ctx=18, majf=0, minf=67 00:31:53.485 IO depths : 1=2.9%, 2=6.3%, 4=18.5%, 8=61.7%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:53.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 issued rwts: total=5276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.485 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.485 filename0: (groupid=0, jobs=1): err= 0: pid=126572: Mon Oct 7 07:50:57 2024 00:31:53.485 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10004msec) 00:31:53.485 slat (usec): min=4, max=131, avg=45.78, stdev=20.16 00:31:53.485 clat (usec): min=8091, max=51027, avg=28113.51, stdev=2884.22 00:31:53.485 lat (usec): min=8106, max=51044, avg=28159.29, stdev=2883.31 00:31:53.485 clat percentiles (usec): 00:31:53.485 | 1.00th=[18744], 5.00th=[26346], 10.00th=[27132], 20.00th=[27657], 00:31:53.485 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.485 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.485 | 99.00th=[41157], 99.50th=[47973], 99.90th=[50594], 99.95th=[51119], 00:31:53.485 | 99.99th=[51119] 00:31:53.485 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2237.68, stdev=78.99, samples=19 00:31:53.485 iops : min= 513, 
max= 576, avg=559.42, stdev=19.75, samples=19 00:31:53.485 lat (msec) : 10=0.36%, 20=0.68%, 50=98.83%, 100=0.14% 00:31:53.485 cpu : usr=98.82%, sys=0.77%, ctx=15, majf=0, minf=56 00:31:53.485 IO depths : 1=5.3%, 2=10.9%, 4=23.2%, 8=53.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:53.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 issued rwts: total=5618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.485 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.485 filename0: (groupid=0, jobs=1): err= 0: pid=126573: Mon Oct 7 07:50:57 2024 00:31:53.485 read: IOPS=563, BW=2254KiB/s (2309kB/s)(22.1MiB/10021msec) 00:31:53.485 slat (usec): min=6, max=207, avg=41.14, stdev=25.06 00:31:53.485 clat (usec): min=21022, max=45131, avg=28016.83, stdev=1180.08 00:31:53.485 lat (usec): min=21033, max=45147, avg=28057.97, stdev=1179.46 00:31:53.485 clat percentiles (usec): 00:31:53.485 | 1.00th=[25297], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.485 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.485 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.485 | 99.00th=[31851], 99.50th=[32900], 99.90th=[42206], 99.95th=[44827], 00:31:53.485 | 99.99th=[45351] 00:31:53.485 bw ( KiB/s): min= 2176, max= 2304, per=4.18%, avg=2252.80, stdev=61.33, samples=20 00:31:53.485 iops : min= 544, max= 576, avg=563.20, stdev=15.33, samples=20 00:31:53.485 lat (msec) : 50=100.00% 00:31:53.485 cpu : usr=97.08%, sys=1.63%, ctx=192, majf=0, minf=91 00:31:53.485 IO depths : 1=5.1%, 2=10.5%, 4=22.5%, 8=54.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:53.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.485 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.485 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:53.485 filename0: (groupid=0, jobs=1): err= 0: pid=126574: Mon Oct 7 07:50:57 2024 00:31:53.485 read: IOPS=565, BW=2263KiB/s (2317kB/s)(22.1MiB/10020msec) 00:31:53.485 slat (usec): min=7, max=112, avg=15.90, stdev= 9.50 00:31:53.485 clat (usec): min=3322, max=44956, avg=28103.20, stdev=1806.61 00:31:53.485 lat (usec): min=3332, max=44990, avg=28119.11, stdev=1806.58 00:31:53.485 clat percentiles (usec): 00:31:53.485 | 1.00th=[21890], 5.00th=[26608], 10.00th=[27395], 20.00th=[27919], 00:31:53.485 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.486 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28967], 95.00th=[28967], 00:31:53.486 | 99.00th=[30016], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:31:53.486 | 99.99th=[44827] 00:31:53.486 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2265.60, stdev=73.12, samples=20 00:31:53.486 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:31:53.486 lat (msec) : 4=0.16%, 10=0.12%, 20=0.19%, 50=99.52% 00:31:53.486 cpu : usr=98.51%, sys=1.09%, ctx=17, majf=0, minf=61 00:31:53.486 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename0: (groupid=0, jobs=1): err= 0: pid=126575: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.0MiB/10009msec) 00:31:53.486 slat (usec): min=5, max=108, avg=36.22, stdev=21.91 00:31:53.486 clat (usec): min=8969, max=71770, avg=28105.28, stdev=3457.74 00:31:53.486 lat (usec): min=8985, max=71787, avg=28141.49, stdev=3457.10 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[17171], 5.00th=[24511], 10.00th=[26608], 
20.00th=[27395], 00:31:53.486 | 30.00th=[27657], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.486 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28967], 95.00th=[31065], 00:31:53.486 | 99.00th=[42730], 99.50th=[45351], 99.90th=[58983], 99.95th=[71828], 00:31:53.486 | 99.99th=[71828] 00:31:53.486 bw ( KiB/s): min= 1920, max= 2400, per=4.18%, avg=2254.32, stdev=109.55, samples=19 00:31:53.486 iops : min= 480, max= 600, avg=563.58, stdev=27.39, samples=19 00:31:53.486 lat (msec) : 10=0.14%, 20=2.00%, 50=97.57%, 100=0.28% 00:31:53.486 cpu : usr=98.54%, sys=1.05%, ctx=12, majf=0, minf=57 00:31:53.486 IO depths : 1=3.5%, 2=7.8%, 4=18.8%, 8=60.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=92.8%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename0: (groupid=0, jobs=1): err= 0: pid=126576: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=567, BW=2268KiB/s (2323kB/s)(22.2MiB/10016msec) 00:31:53.486 slat (usec): min=7, max=110, avg=19.51, stdev=11.66 00:31:53.486 clat (usec): min=5933, max=41571, avg=28060.42, stdev=2181.88 00:31:53.486 lat (usec): min=5958, max=41594, avg=28079.93, stdev=2181.93 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[24511], 5.00th=[26608], 10.00th=[27395], 20.00th=[27919], 00:31:53.486 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.486 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230], 00:31:53.486 | 99.00th=[31065], 99.50th=[31327], 99.90th=[41681], 99.95th=[41681], 00:31:53.486 | 99.99th=[41681] 00:31:53.486 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2265.60, stdev=71.82, samples=20 00:31:53.486 iops : min= 544, max= 608, avg=566.40, stdev=17.95, samples=20 00:31:53.486 lat (msec) : 10=0.85%, 50=99.15% 
00:31:53.486 cpu : usr=98.49%, sys=1.11%, ctx=14, majf=0, minf=49 00:31:53.486 IO depths : 1=5.2%, 2=11.1%, 4=24.3%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename0: (groupid=0, jobs=1): err= 0: pid=126577: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=563, BW=2253KiB/s (2307kB/s)(22.0MiB/10008msec) 00:31:53.486 slat (usec): min=4, max=112, avg=46.34, stdev=18.30 00:31:53.486 clat (usec): min=8215, max=72755, avg=27994.77, stdev=2207.17 00:31:53.486 lat (usec): min=8244, max=72770, avg=28041.11, stdev=2205.62 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[25560], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.486 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.486 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.486 | 99.00th=[30016], 99.50th=[42206], 99.90th=[52691], 99.95th=[72877], 00:31:53.486 | 99.99th=[72877] 00:31:53.486 bw ( KiB/s): min= 1968, max= 2304, per=4.16%, avg=2245.89, stdev=90.39, samples=19 00:31:53.486 iops : min= 492, max= 576, avg=561.47, stdev=22.60, samples=19 00:31:53.486 lat (msec) : 10=0.28%, 20=0.12%, 50=99.31%, 100=0.28% 00:31:53.486 cpu : usr=98.64%, sys=0.96%, ctx=14, majf=0, minf=53 00:31:53.486 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename1: (groupid=0, jobs=1): 
err= 0: pid=126578: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=566, BW=2268KiB/s (2322kB/s)(22.2MiB/10018msec) 00:31:53.486 slat (usec): min=7, max=135, avg=28.78, stdev=16.57 00:31:53.486 clat (usec): min=6448, max=41579, avg=28001.74, stdev=1922.92 00:31:53.486 lat (usec): min=6464, max=41596, avg=28030.52, stdev=1923.77 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[19006], 5.00th=[26608], 10.00th=[27395], 20.00th=[27657], 00:31:53.486 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:31:53.486 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.486 | 99.00th=[30540], 99.50th=[32113], 99.90th=[41681], 99.95th=[41681], 00:31:53.486 | 99.99th=[41681] 00:31:53.486 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2265.60, stdev=73.12, samples=20 00:31:53.486 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:31:53.486 lat (msec) : 10=0.28%, 20=0.85%, 50=98.87% 00:31:53.486 cpu : usr=98.75%, sys=0.84%, ctx=13, majf=0, minf=54 00:31:53.486 IO depths : 1=5.6%, 2=11.6%, 4=24.4%, 8=51.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename1: (groupid=0, jobs=1): err= 0: pid=126579: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=537, BW=2152KiB/s (2203kB/s)(21.0MiB/10006msec) 00:31:53.486 slat (usec): min=6, max=119, avg=21.52, stdev=15.32 00:31:53.486 clat (usec): min=8154, max=64449, avg=29632.23, stdev=4673.74 00:31:53.486 lat (usec): min=8191, max=64466, avg=29653.75, stdev=4673.17 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[19530], 5.00th=[27395], 10.00th=[27919], 20.00th=[28181], 00:31:53.486 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 
60.00th=[28443], 00:31:53.486 | 70.00th=[28705], 80.00th=[28967], 90.00th=[33817], 95.00th=[38536], 00:31:53.486 | 99.00th=[48497], 99.50th=[51119], 99.90th=[64226], 99.95th=[64226], 00:31:53.486 | 99.99th=[64226] 00:31:53.486 bw ( KiB/s): min= 1664, max= 2304, per=3.96%, avg=2136.63, stdev=189.82, samples=19 00:31:53.486 iops : min= 416, max= 576, avg=534.16, stdev=47.46, samples=19 00:31:53.486 lat (msec) : 10=0.37%, 20=0.67%, 50=98.29%, 100=0.67% 00:31:53.486 cpu : usr=98.66%, sys=0.90%, ctx=17, majf=0, minf=56 00:31:53.486 IO depths : 1=0.2%, 2=1.2%, 4=6.9%, 8=75.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=90.9%, 8=7.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename1: (groupid=0, jobs=1): err= 0: pid=126580: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10020msec) 00:31:53.486 slat (nsec): min=7271, max=96514, avg=21825.01, stdev=11787.60 00:31:53.486 clat (usec): min=5535, max=45077, avg=28024.75, stdev=1889.01 00:31:53.486 lat (usec): min=5546, max=45103, avg=28046.58, stdev=1889.94 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[16188], 5.00th=[26608], 10.00th=[27132], 20.00th=[27919], 00:31:53.486 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.486 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.486 | 99.00th=[31327], 99.50th=[32113], 99.90th=[42206], 99.95th=[42206], 00:31:53.486 | 99.99th=[44827] 00:31:53.486 bw ( KiB/s): min= 2176, max= 2480, per=4.21%, avg=2268.00, stdev=79.39, samples=20 00:31:53.486 iops : min= 544, max= 620, avg=567.00, stdev=19.85, samples=20 00:31:53.486 lat (msec) : 10=0.11%, 20=1.16%, 50=98.73% 00:31:53.486 cpu : usr=98.52%, sys=1.05%, ctx=18, majf=0, 
minf=65 00:31:53.486 IO depths : 1=5.1%, 2=10.8%, 4=24.0%, 8=52.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename1: (groupid=0, jobs=1): err= 0: pid=126581: Mon Oct 7 07:50:57 2024 00:31:53.486 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10008msec) 00:31:53.486 slat (usec): min=5, max=117, avg=40.81, stdev=20.85 00:31:53.486 clat (usec): min=13010, max=62851, avg=27846.60, stdev=2641.35 00:31:53.486 lat (usec): min=13019, max=62870, avg=27887.41, stdev=2642.91 00:31:53.486 clat percentiles (usec): 00:31:53.486 | 1.00th=[16188], 5.00th=[26346], 10.00th=[27132], 20.00th=[27657], 00:31:53.486 | 30.00th=[27657], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.486 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.486 | 99.00th=[32375], 99.50th=[36439], 99.90th=[59507], 99.95th=[62653], 00:31:53.486 | 99.99th=[62653] 00:31:53.486 bw ( KiB/s): min= 2096, max= 2496, per=4.21%, avg=2272.00, stdev=102.73, samples=19 00:31:53.486 iops : min= 524, max= 624, avg=568.00, stdev=25.68, samples=19 00:31:53.486 lat (msec) : 20=1.76%, 50=97.96%, 100=0.28% 00:31:53.486 cpu : usr=98.74%, sys=0.85%, ctx=15, majf=0, minf=57 00:31:53.486 IO depths : 1=5.1%, 2=10.4%, 4=22.2%, 8=54.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:53.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.486 issued rwts: total=5684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.486 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.486 filename1: (groupid=0, jobs=1): err= 0: pid=126582: Mon Oct 7 07:50:57 2024 00:31:53.487 read: 
IOPS=563, BW=2253KiB/s (2307kB/s)(22.0MiB/10008msec) 00:31:53.487 slat (usec): min=4, max=119, avg=33.07, stdev=21.09 00:31:53.487 clat (usec): min=8803, max=52253, avg=28147.75, stdev=2464.76 00:31:53.487 lat (usec): min=8812, max=52266, avg=28180.82, stdev=2463.89 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[20841], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.487 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28443], 00:31:53.487 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28967], 95.00th=[29230], 00:31:53.487 | 99.00th=[36963], 99.50th=[45351], 99.90th=[52167], 99.95th=[52167], 00:31:53.487 | 99.99th=[52167] 00:31:53.487 bw ( KiB/s): min= 2052, max= 2304, per=4.16%, avg=2244.42, stdev=72.11, samples=19 00:31:53.487 iops : min= 513, max= 576, avg=561.11, stdev=18.03, samples=19 00:31:53.487 lat (msec) : 10=0.28%, 20=0.43%, 50=99.01%, 100=0.28% 00:31:53.487 cpu : usr=98.76%, sys=0.83%, ctx=13, majf=0, minf=56 00:31:53.487 IO depths : 1=2.5%, 2=6.3%, 4=15.7%, 8=63.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=92.4%, 8=4.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename1: (groupid=0, jobs=1): err= 0: pid=126583: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.8MiB/10006msec) 00:31:53.487 slat (usec): min=4, max=132, avg=41.23, stdev=19.77 00:31:53.487 clat (usec): min=4914, max=64913, avg=28400.04, stdev=3985.87 00:31:53.487 lat (usec): min=4922, max=64927, avg=28441.26, stdev=3983.37 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[10290], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.487 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.487 | 70.00th=[28443], 
80.00th=[28443], 90.00th=[28967], 95.00th=[31851], 00:31:53.487 | 99.00th=[44827], 99.50th=[47973], 99.90th=[50594], 99.95th=[64750], 00:31:53.487 | 99.99th=[64750] 00:31:53.487 bw ( KiB/s): min= 1920, max= 2376, per=4.11%, avg=2216.42, stdev=115.82, samples=19 00:31:53.487 iops : min= 480, max= 594, avg=554.11, stdev=28.96, samples=19 00:31:53.487 lat (msec) : 10=0.84%, 20=0.63%, 50=98.24%, 100=0.29% 00:31:53.487 cpu : usr=98.73%, sys=0.87%, ctx=11, majf=0, minf=40 00:31:53.487 IO depths : 1=4.5%, 2=9.8%, 4=23.5%, 8=54.1%, 16=8.1%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename1: (groupid=0, jobs=1): err= 0: pid=126584: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.0MiB/10004msec) 00:31:53.487 slat (usec): min=6, max=118, avg=43.83, stdev=19.09 00:31:53.487 clat (usec): min=13601, max=58362, avg=28065.15, stdev=1949.26 00:31:53.487 lat (usec): min=13626, max=58379, avg=28108.98, stdev=1946.85 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[25560], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.487 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.487 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.487 | 99.00th=[29754], 99.50th=[36963], 99.90th=[58459], 99.95th=[58459], 00:31:53.487 | 99.99th=[58459] 00:31:53.487 bw ( KiB/s): min= 2036, max= 2320, per=4.17%, avg=2250.32, stdev=80.15, samples=19 00:31:53.487 iops : min= 509, max= 580, avg=562.58, stdev=20.04, samples=19 00:31:53.487 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:31:53.487 cpu : usr=98.62%, sys=0.97%, ctx=13, majf=0, minf=65 00:31:53.487 IO depths : 1=5.8%, 2=12.1%, 4=24.9%, 
8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename1: (groupid=0, jobs=1): err= 0: pid=126585: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=565, BW=2264KiB/s (2318kB/s)(22.1MiB/10001msec) 00:31:53.487 slat (usec): min=6, max=146, avg=39.92, stdev=25.45 00:31:53.487 clat (usec): min=7606, max=47887, avg=27895.29, stdev=1905.80 00:31:53.487 lat (usec): min=7621, max=47894, avg=27935.21, stdev=1908.29 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[20579], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.487 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.487 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.487 | 99.00th=[31589], 99.50th=[39060], 99.90th=[47973], 99.95th=[47973], 00:31:53.487 | 99.99th=[47973] 00:31:53.487 bw ( KiB/s): min= 2176, max= 2352, per=4.19%, avg=2261.89, stdev=63.82, samples=19 00:31:53.487 iops : min= 544, max= 588, avg=565.47, stdev=15.96, samples=19 00:31:53.487 lat (msec) : 10=0.11%, 20=0.67%, 50=99.22% 00:31:53.487 cpu : usr=98.56%, sys=0.88%, ctx=115, majf=0, minf=59 00:31:53.487 IO depths : 1=5.5%, 2=11.2%, 4=23.5%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename2: (groupid=0, jobs=1): err= 0: pid=126586: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.3MiB/10021msec) 00:31:53.487 
slat (usec): min=6, max=129, avg=28.46, stdev=23.37 00:31:53.487 clat (usec): min=5702, max=51741, avg=27916.86, stdev=5574.52 00:31:53.487 lat (usec): min=5737, max=51749, avg=27945.32, stdev=5575.06 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[15664], 5.00th=[17171], 10.00th=[21365], 20.00th=[26608], 00:31:53.487 | 30.00th=[27395], 40.00th=[27919], 50.00th=[28181], 60.00th=[28443], 00:31:53.487 | 70.00th=[28443], 80.00th=[28967], 90.00th=[31327], 95.00th=[35914], 00:31:53.487 | 99.00th=[49546], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:31:53.487 | 99.99th=[51643] 00:31:53.487 bw ( KiB/s): min= 1792, max= 2544, per=4.22%, avg=2276.00, stdev=168.68, samples=20 00:31:53.487 iops : min= 448, max= 636, avg=569.00, stdev=42.17, samples=20 00:31:53.487 lat (msec) : 10=0.35%, 20=7.66%, 50=91.75%, 100=0.25% 00:31:53.487 cpu : usr=98.89%, sys=0.63%, ctx=63, majf=0, minf=53 00:31:53.487 IO depths : 1=1.7%, 2=3.4%, 4=10.8%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename2: (groupid=0, jobs=1): err= 0: pid=126587: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.1MiB/10020msec) 00:31:53.487 slat (usec): min=7, max=117, avg=42.12, stdev=19.16 00:31:53.487 clat (usec): min=20428, max=42931, avg=28060.48, stdev=1050.59 00:31:53.487 lat (usec): min=20476, max=42947, avg=28102.60, stdev=1047.51 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[26084], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.487 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.487 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.487 | 
99.00th=[29754], 99.50th=[30540], 99.90th=[42206], 99.95th=[42730], 00:31:53.487 | 99.99th=[42730] 00:31:53.487 bw ( KiB/s): min= 2176, max= 2304, per=4.18%, avg=2252.80, stdev=64.34, samples=20 00:31:53.487 iops : min= 544, max= 576, avg=563.20, stdev=16.08, samples=20 00:31:53.487 lat (msec) : 50=100.00% 00:31:53.487 cpu : usr=98.78%, sys=0.81%, ctx=13, majf=0, minf=62 00:31:53.487 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename2: (groupid=0, jobs=1): err= 0: pid=126588: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.3MiB/10023msec) 00:31:53.487 slat (usec): min=3, max=116, avg=24.60, stdev=18.69 00:31:53.487 clat (usec): min=3992, max=41511, avg=27911.03, stdev=2516.64 00:31:53.487 lat (usec): min=4001, max=41526, avg=27935.64, stdev=2517.56 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[13698], 5.00th=[26608], 10.00th=[27395], 20.00th=[27657], 00:31:53.487 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.487 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.487 | 99.00th=[30016], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:31:53.487 | 99.99th=[41681] 00:31:53.487 bw ( KiB/s): min= 2176, max= 2432, per=4.22%, avg=2274.40, stdev=75.63, samples=20 00:31:53.487 iops : min= 544, max= 608, avg=568.60, stdev=18.91, samples=20 00:31:53.487 lat (msec) : 4=0.02%, 10=0.67%, 20=0.98%, 50=98.33% 00:31:53.487 cpu : usr=98.43%, sys=1.10%, ctx=41, majf=0, minf=62 00:31:53.487 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.487 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.487 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.487 filename2: (groupid=0, jobs=1): err= 0: pid=126589: Mon Oct 7 07:50:57 2024 00:31:53.487 read: IOPS=566, BW=2267KiB/s (2322kB/s)(22.2MiB/10020msec) 00:31:53.487 slat (usec): min=4, max=129, avg=43.24, stdev=24.16 00:31:53.487 clat (usec): min=5943, max=41642, avg=27831.20, stdev=2145.60 00:31:53.487 lat (usec): min=5967, max=41654, avg=27874.43, stdev=2148.21 00:31:53.487 clat percentiles (usec): 00:31:53.487 | 1.00th=[25560], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.487 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.487 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:31:53.487 | 99.00th=[30016], 99.50th=[31589], 99.90th=[41681], 99.95th=[41681], 00:31:53.487 | 99.99th=[41681] 00:31:53.487 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2265.60, stdev=73.12, samples=20 00:31:53.487 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:31:53.487 lat (msec) : 10=0.85%, 50=99.15% 00:31:53.487 cpu : usr=98.59%, sys=0.95%, ctx=58, majf=0, minf=51 00:31:53.487 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:53.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.488 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.488 filename2: (groupid=0, jobs=1): err= 0: pid=126590: Mon Oct 7 07:50:57 2024 00:31:53.488 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.0MiB/10004msec) 00:31:53.488 slat (usec): min=6, max=115, avg=28.70, stdev=17.80 00:31:53.488 clat (usec): min=13358, max=58488, 
avg=28167.76, stdev=1954.32 00:31:53.488 lat (usec): min=13368, max=58505, avg=28196.46, stdev=1952.19 00:31:53.488 clat percentiles (usec): 00:31:53.488 | 1.00th=[25822], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:31:53.488 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:31:53.488 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.488 | 99.00th=[30540], 99.50th=[32637], 99.90th=[58459], 99.95th=[58459], 00:31:53.488 | 99.99th=[58459] 00:31:53.488 bw ( KiB/s): min= 2032, max= 2320, per=4.17%, avg=2250.11, stdev=80.75, samples=19 00:31:53.488 iops : min= 508, max= 580, avg=562.53, stdev=20.19, samples=19 00:31:53.488 lat (msec) : 20=0.18%, 50=99.54%, 100=0.28% 00:31:53.488 cpu : usr=98.74%, sys=0.85%, ctx=13, majf=0, minf=48 00:31:53.488 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:53.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.488 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.488 filename2: (groupid=0, jobs=1): err= 0: pid=126591: Mon Oct 7 07:50:57 2024 00:31:53.488 read: IOPS=566, BW=2266KiB/s (2321kB/s)(22.2MiB/10025msec) 00:31:53.488 slat (usec): min=4, max=112, avg=19.12, stdev=15.97 00:31:53.488 clat (usec): min=6091, max=41291, avg=28074.51, stdev=2020.47 00:31:53.488 lat (usec): min=6112, max=41322, avg=28093.62, stdev=2020.60 00:31:53.488 clat percentiles (usec): 00:31:53.488 | 1.00th=[24249], 5.00th=[27132], 10.00th=[27395], 20.00th=[27919], 00:31:53.488 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:31:53.488 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.488 | 99.00th=[30278], 99.50th=[30802], 99.90th=[41157], 99.95th=[41157], 00:31:53.488 | 99.99th=[41157] 00:31:53.488 bw ( 
KiB/s): min= 2176, max= 2432, per=4.20%, avg=2265.60, stdev=73.12, samples=20 00:31:53.488 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:31:53.488 lat (msec) : 10=0.56%, 20=0.28%, 50=99.15% 00:31:53.488 cpu : usr=98.59%, sys=1.00%, ctx=28, majf=0, minf=74 00:31:53.488 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:53.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.488 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.488 filename2: (groupid=0, jobs=1): err= 0: pid=126592: Mon Oct 7 07:50:57 2024 00:31:53.488 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.1MiB/10020msec) 00:31:53.488 slat (usec): min=4, max=116, avg=35.81, stdev=20.52 00:31:53.488 clat (usec): min=14428, max=45965, avg=28076.54, stdev=1920.68 00:31:53.488 lat (usec): min=14436, max=46027, avg=28112.35, stdev=1920.51 00:31:53.488 clat percentiles (usec): 00:31:53.488 | 1.00th=[19792], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.488 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:31:53.488 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[29230], 00:31:53.488 | 99.00th=[35914], 99.50th=[38536], 99.90th=[44827], 99.95th=[45351], 00:31:53.488 | 99.99th=[45876] 00:31:53.488 bw ( KiB/s): min= 2176, max= 2384, per=4.19%, avg=2256.80, stdev=66.98, samples=20 00:31:53.488 iops : min= 544, max= 596, avg=564.20, stdev=16.74, samples=20 00:31:53.488 lat (msec) : 20=1.10%, 50=98.90% 00:31:53.488 cpu : usr=98.71%, sys=0.87%, ctx=17, majf=0, minf=55 00:31:53.488 IO depths : 1=4.6%, 2=9.8%, 4=22.6%, 8=55.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:53.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:53.488 issued rwts: total=5658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.488 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.488 filename2: (groupid=0, jobs=1): err= 0: pid=126593: Mon Oct 7 07:50:57 2024 00:31:53.488 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10005msec) 00:31:53.488 slat (usec): min=6, max=119, avg=44.11, stdev=21.08 00:31:53.488 clat (usec): min=7336, max=50843, avg=27983.43, stdev=2452.75 00:31:53.488 lat (usec): min=7369, max=50858, avg=28027.54, stdev=2454.01 00:31:53.488 clat percentiles (usec): 00:31:53.488 | 1.00th=[19268], 5.00th=[26608], 10.00th=[27132], 20.00th=[27657], 00:31:53.488 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:31:53.488 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:31:53.488 | 99.00th=[36439], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:31:53.488 | 99.99th=[50594] 00:31:53.488 bw ( KiB/s): min= 2052, max= 2304, per=4.17%, avg=2249.47, stdev=76.77, samples=19 00:31:53.488 iops : min= 513, max= 576, avg=562.37, stdev=19.19, samples=19 00:31:53.488 lat (msec) : 10=0.35%, 20=0.87%, 50=98.74%, 100=0.04% 00:31:53.488 cpu : usr=98.70%, sys=0.85%, ctx=46, majf=0, minf=49 00:31:53.488 IO depths : 1=4.6%, 2=9.9%, 4=21.6%, 8=55.2%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:53.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.488 issued rwts: total=5646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.488 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.488 00:31:53.488 Run status group 0 (all jobs): 00:31:53.488 READ: bw=52.6MiB/s (55.2MB/s), 2110KiB/s-2278KiB/s (2160kB/s-2332kB/s), io=528MiB (553MB), run=10001-10025msec 00:31:54.052 07:50:57 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:54.052 07:50:57 -- target/dif.sh@43 -- # local sub 00:31:54.052 07:50:57 -- target/dif.sh@45 -- # for sub in 
"$@" 00:31:54.052 07:50:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:54.052 07:50:57 -- target/dif.sh@36 -- # local sub_id=0 00:31:54.052 07:50:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.052 07:50:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:54.052 07:50:57 -- target/dif.sh@36 -- # local sub_id=1 00:31:54.052 07:50:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.052 07:50:57 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:54.052 07:50:57 -- target/dif.sh@36 -- # local sub_id=2 00:31:54.052 07:50:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 
00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # numjobs=2 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # iodepth=8 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # runtime=5 00:31:54.052 07:50:57 -- target/dif.sh@115 -- # files=1 00:31:54.052 07:50:57 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:54.052 07:50:57 -- target/dif.sh@28 -- # local sub 00:31:54.052 07:50:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.052 07:50:57 -- target/dif.sh@31 -- # create_subsystem 0 00:31:54.052 07:50:57 -- target/dif.sh@18 -- # local sub_id=0 00:31:54.052 07:50:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 bdev_null0 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 
-- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 [2024-10-07 07:50:57.805895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.052 07:50:57 -- target/dif.sh@31 -- # create_subsystem 1 00:31:54.052 07:50:57 -- target/dif.sh@18 -- # local sub_id=1 00:31:54.052 07:50:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 bdev_null1 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.052 07:50:57 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:31:54.052 07:50:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.052 07:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.052 07:50:57 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:54.052 07:50:57 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:54.052 07:50:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:54.052 07:50:57 -- nvmf/common.sh@520 -- # config=() 00:31:54.052 07:50:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.052 07:50:57 -- nvmf/common.sh@520 -- # local subsystem config 00:31:54.052 07:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:54.052 07:50:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.052 07:50:57 -- target/dif.sh@82 -- # gen_fio_conf 00:31:54.052 07:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:54.052 { 00:31:54.052 "params": { 00:31:54.052 "name": "Nvme$subsystem", 00:31:54.052 "trtype": "$TEST_TRANSPORT", 00:31:54.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.052 "adrfam": "ipv4", 00:31:54.052 "trsvcid": "$NVMF_PORT", 00:31:54.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.052 "hdgst": ${hdgst:-false}, 00:31:54.052 "ddgst": ${ddgst:-false} 00:31:54.052 }, 00:31:54.052 "method": "bdev_nvme_attach_controller" 00:31:54.052 } 00:31:54.052 EOF 00:31:54.052 )") 00:31:54.052 07:50:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:54.052 07:50:57 -- target/dif.sh@54 -- # local file 00:31:54.052 07:50:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.052 07:50:57 -- target/dif.sh@56 -- # cat 00:31:54.052 07:50:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:54.052 07:50:57 -- 
common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.052 07:50:57 -- common/autotest_common.sh@1320 -- # shift 00:31:54.052 07:50:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:54.053 07:50:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.053 07:50:57 -- nvmf/common.sh@542 -- # cat 00:31:54.053 07:50:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.053 07:50:57 -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:54.053 07:50:57 -- target/dif.sh@73 -- # cat 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:54.053 07:50:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:54.053 07:50:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:54.053 { 00:31:54.053 "params": { 00:31:54.053 "name": "Nvme$subsystem", 00:31:54.053 "trtype": "$TEST_TRANSPORT", 00:31:54.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.053 "adrfam": "ipv4", 00:31:54.053 "trsvcid": "$NVMF_PORT", 00:31:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.053 "hdgst": ${hdgst:-false}, 00:31:54.053 "ddgst": ${ddgst:-false} 00:31:54.053 }, 00:31:54.053 "method": "bdev_nvme_attach_controller" 00:31:54.053 } 00:31:54.053 EOF 00:31:54.053 )") 00:31:54.053 07:50:57 -- target/dif.sh@72 -- # (( file++ )) 00:31:54.053 07:50:57 -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.053 07:50:57 -- nvmf/common.sh@542 -- # cat 00:31:54.053 07:50:57 -- nvmf/common.sh@544 -- # jq . 
00:31:54.053 07:50:57 -- nvmf/common.sh@545 -- # IFS=, 00:31:54.053 07:50:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:54.053 "params": { 00:31:54.053 "name": "Nvme0", 00:31:54.053 "trtype": "tcp", 00:31:54.053 "traddr": "10.0.0.2", 00:31:54.053 "adrfam": "ipv4", 00:31:54.053 "trsvcid": "4420", 00:31:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.053 "hdgst": false, 00:31:54.053 "ddgst": false 00:31:54.053 }, 00:31:54.053 "method": "bdev_nvme_attach_controller" 00:31:54.053 },{ 00:31:54.053 "params": { 00:31:54.053 "name": "Nvme1", 00:31:54.053 "trtype": "tcp", 00:31:54.053 "traddr": "10.0.0.2", 00:31:54.053 "adrfam": "ipv4", 00:31:54.053 "trsvcid": "4420", 00:31:54.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.053 "hdgst": false, 00:31:54.053 "ddgst": false 00:31:54.053 }, 00:31:54.053 "method": "bdev_nvme_attach_controller" 00:31:54.053 }' 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:54.053 07:50:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:54.053 07:50:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:54.053 07:50:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:54.053 07:50:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:54.053 07:50:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:54.053 07:50:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.311 filename0: (g=0): rw=randread, bs=(R) 
8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:54.311 ... 00:31:54.311 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:54.311 ... 00:31:54.311 fio-3.35 00:31:54.311 Starting 4 threads 00:31:54.311 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.877 [2024-10-07 07:50:58.759192] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:54.877 [2024-10-07 07:50:58.759246] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:00.139 00:32:00.139 filename0: (groupid=0, jobs=1): err= 0: pid=128522: Mon Oct 7 07:51:03 2024 00:32:00.139 read: IOPS=2679, BW=20.9MiB/s (21.9MB/s)(105MiB/5002msec) 00:32:00.139 slat (nsec): min=5955, max=33539, avg=9341.53, stdev=3086.06 00:32:00.139 clat (usec): min=1207, max=45200, avg=2959.94, stdev=1146.01 00:32:00.139 lat (usec): min=1214, max=45230, avg=2969.28, stdev=1146.12 00:32:00.139 clat percentiles (usec): 00:32:00.139 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:32:00.139 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:32:00.139 | 70.00th=[ 2999], 80.00th=[ 3195], 90.00th=[ 3818], 95.00th=[ 4047], 00:32:00.139 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[45351], 00:32:00.139 | 99.99th=[45351] 00:32:00.139 bw ( KiB/s): min=20032, max=22768, per=24.68%, avg=21418.67, stdev=1069.97, samples=9 00:32:00.139 iops : min= 2502, max= 2846, avg=2677.11, stdev=134.07, samples=9 00:32:00.139 lat (msec) : 2=0.69%, 4=92.93%, 10=6.32%, 50=0.06% 00:32:00.139 cpu : usr=96.32%, sys=3.30%, ctx=12, majf=0, minf=0 00:32:00.139 IO depths : 1=0.1%, 2=1.4%, 4=69.7%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:00.139 issued rwts: total=13401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:00.139 filename0: (groupid=0, jobs=1): err= 0: pid=128523: Mon Oct 7 07:51:03 2024 00:32:00.139 read: IOPS=2760, BW=21.6MiB/s (22.6MB/s)(108MiB/5002msec) 00:32:00.139 slat (nsec): min=5993, max=32754, avg=9129.70, stdev=3063.09 00:32:00.139 clat (usec): min=1293, max=5532, avg=2871.30, stdev=460.83 00:32:00.139 lat (usec): min=1304, max=5539, avg=2880.43, stdev=460.51 00:32:00.139 clat percentiles (usec): 00:32:00.139 | 1.00th=[ 1713], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2606], 00:32:00.139 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2835], 00:32:00.139 | 70.00th=[ 2933], 80.00th=[ 3064], 90.00th=[ 3359], 95.00th=[ 3851], 00:32:00.139 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5342], 00:32:00.139 | 99.99th=[ 5538] 00:32:00.139 bw ( KiB/s): min=20992, max=23248, per=25.45%, avg=22087.60, stdev=636.22, samples=10 00:32:00.139 iops : min= 2624, max= 2906, avg=2760.90, stdev=79.51, samples=10 00:32:00.139 lat (msec) : 2=1.69%, 4=94.57%, 10=3.74% 00:32:00.139 cpu : usr=96.28%, sys=3.30%, ctx=11, majf=0, minf=9 00:32:00.139 IO depths : 1=0.1%, 2=1.7%, 4=71.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 issued rwts: total=13810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:00.139 filename1: (groupid=0, jobs=1): err= 0: pid=128524: Mon Oct 7 07:51:03 2024 00:32:00.139 read: IOPS=2738, BW=21.4MiB/s (22.4MB/s)(108MiB/5042msec) 00:32:00.139 slat (nsec): min=5527, max=35800, avg=9360.14, stdev=3115.39 00:32:00.139 clat (usec): min=1352, max=46253, avg=2890.73, stdev=1382.03 00:32:00.139 lat (usec): min=1362, max=46277, avg=2900.09, 
stdev=1381.97 00:32:00.139 clat percentiles (usec): 00:32:00.139 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2606], 00:32:00.139 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2835], 00:32:00.139 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3589], 00:32:00.139 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[41157], 99.95th=[46400], 00:32:00.139 | 99.99th=[46400] 00:32:00.139 bw ( KiB/s): min=20544, max=24032, per=25.44%, avg=22085.70, stdev=1127.25, samples=10 00:32:00.139 iops : min= 2568, max= 3004, avg=2760.70, stdev=140.92, samples=10 00:32:00.139 lat (msec) : 2=0.70%, 4=96.32%, 10=2.88%, 50=0.10% 00:32:00.139 cpu : usr=96.37%, sys=3.27%, ctx=11, majf=0, minf=11 00:32:00.139 IO depths : 1=0.1%, 2=1.8%, 4=66.3%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 issued rwts: total=13809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:00.139 filename1: (groupid=0, jobs=1): err= 0: pid=128525: Mon Oct 7 07:51:03 2024 00:32:00.139 read: IOPS=2735, BW=21.4MiB/s (22.4MB/s)(107MiB/5002msec) 00:32:00.139 slat (nsec): min=5952, max=35914, avg=9610.34, stdev=3555.77 00:32:00.139 clat (usec): min=1431, max=5784, avg=2896.90, stdev=500.07 00:32:00.139 lat (usec): min=1445, max=5792, avg=2906.51, stdev=499.53 00:32:00.139 clat percentiles (usec): 00:32:00.139 | 1.00th=[ 1713], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2606], 00:32:00.139 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2835], 00:32:00.139 | 70.00th=[ 2966], 80.00th=[ 3097], 90.00th=[ 3523], 95.00th=[ 4113], 00:32:00.139 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5145], 00:32:00.139 | 99.99th=[ 5800] 00:32:00.139 bw ( KiB/s): min=20240, max=24080, per=25.38%, avg=22026.67, stdev=1229.04, 
samples=9 00:32:00.139 iops : min= 2530, max= 3010, avg=2753.33, stdev=153.63, samples=9 00:32:00.139 lat (msec) : 2=1.69%, 4=92.10%, 10=6.21% 00:32:00.139 cpu : usr=94.04%, sys=4.18%, ctx=296, majf=0, minf=0 00:32:00.139 IO depths : 1=0.1%, 2=2.8%, 4=67.7%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.139 issued rwts: total=13685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.139 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:00.139 00:32:00.139 Run status group 0 (all jobs): 00:32:00.139 READ: bw=84.8MiB/s (88.9MB/s), 20.9MiB/s-21.6MiB/s (21.9MB/s-22.6MB/s), io=427MiB (448MB), run=5002-5042msec 00:32:00.397 07:51:04 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:00.397 07:51:04 -- target/dif.sh@43 -- # local sub 00:32:00.397 07:51:04 -- target/dif.sh@45 -- # for sub in "$@" 00:32:00.397 07:51:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:00.397 07:51:04 -- target/dif.sh@36 -- # local sub_id=0 00:32:00.397 07:51:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:00.397 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.397 07:51:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:00.397 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.397 07:51:04 -- target/dif.sh@45 -- # for sub in "$@" 00:32:00.397 07:51:04 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:00.397 07:51:04 -- target/dif.sh@36 -- # local sub_id=1 00:32:00.397 07:51:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:32:00.397 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.397 07:51:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:00.397 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.397 00:32:00.397 real 0m24.485s 00:32:00.397 user 4m51.352s 00:32:00.397 sys 0m4.732s 00:32:00.397 07:51:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 ************************************ 00:32:00.397 END TEST fio_dif_rand_params 00:32:00.397 ************************************ 00:32:00.397 07:51:04 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:00.397 07:51:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:00.397 07:51:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.397 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.397 ************************************ 00:32:00.397 START TEST fio_dif_digest 00:32:00.397 ************************************ 00:32:00.397 07:51:04 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:32:00.397 07:51:04 -- target/dif.sh@123 -- # local NULL_DIF 00:32:00.397 07:51:04 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:00.397 07:51:04 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:00.397 07:51:04 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:00.397 07:51:04 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:00.397 07:51:04 -- target/dif.sh@127 -- # numjobs=3 00:32:00.397 07:51:04 -- target/dif.sh@127 -- # iodepth=3 00:32:00.397 07:51:04 -- target/dif.sh@127 -- # runtime=10 00:32:00.397 07:51:04 -- target/dif.sh@128 -- 
# hdgst=true 00:32:00.397 07:51:04 -- target/dif.sh@128 -- # ddgst=true 00:32:00.397 07:51:04 -- target/dif.sh@130 -- # create_subsystems 0 00:32:00.397 07:51:04 -- target/dif.sh@28 -- # local sub 00:32:00.397 07:51:04 -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.397 07:51:04 -- target/dif.sh@31 -- # create_subsystem 0 00:32:00.397 07:51:04 -- target/dif.sh@18 -- # local sub_id=0 00:32:00.397 07:51:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:00.398 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.398 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.398 bdev_null0 00:32:00.398 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.398 07:51:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:00.398 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.398 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.398 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.398 07:51:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:00.398 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.398 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.398 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.398 07:51:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.398 07:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.398 07:51:04 -- common/autotest_common.sh@10 -- # set +x 00:32:00.398 [2024-10-07 07:51:04.250703] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.398 07:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.398 07:51:04 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:00.398 
07:51:04 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:00.398 07:51:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:00.398 07:51:04 -- nvmf/common.sh@520 -- # config=() 00:32:00.398 07:51:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.398 07:51:04 -- nvmf/common.sh@520 -- # local subsystem config 00:32:00.398 07:51:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:00.398 07:51:04 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.398 07:51:04 -- target/dif.sh@82 -- # gen_fio_conf 00:32:00.398 07:51:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:00.398 { 00:32:00.398 "params": { 00:32:00.398 "name": "Nvme$subsystem", 00:32:00.398 "trtype": "$TEST_TRANSPORT", 00:32:00.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.398 "adrfam": "ipv4", 00:32:00.398 "trsvcid": "$NVMF_PORT", 00:32:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.398 "hdgst": ${hdgst:-false}, 00:32:00.398 "ddgst": ${ddgst:-false} 00:32:00.398 }, 00:32:00.398 "method": "bdev_nvme_attach_controller" 00:32:00.398 } 00:32:00.398 EOF 00:32:00.398 )") 00:32:00.398 07:51:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:00.398 07:51:04 -- target/dif.sh@54 -- # local file 00:32:00.398 07:51:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.398 07:51:04 -- target/dif.sh@56 -- # cat 00:32:00.398 07:51:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:00.398 07:51:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.398 07:51:04 -- common/autotest_common.sh@1320 -- # shift 00:32:00.398 07:51:04 -- common/autotest_common.sh@1322 -- # local 
asan_lib= 00:32:00.398 07:51:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.398 07:51:04 -- nvmf/common.sh@542 -- # cat 00:32:00.398 07:51:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:00.398 07:51:04 -- target/dif.sh@72 -- # (( file <= files )) 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:00.398 07:51:04 -- nvmf/common.sh@544 -- # jq . 00:32:00.398 07:51:04 -- nvmf/common.sh@545 -- # IFS=, 00:32:00.398 07:51:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:00.398 "params": { 00:32:00.398 "name": "Nvme0", 00:32:00.398 "trtype": "tcp", 00:32:00.398 "traddr": "10.0.0.2", 00:32:00.398 "adrfam": "ipv4", 00:32:00.398 "trsvcid": "4420", 00:32:00.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.398 "hdgst": true, 00:32:00.398 "ddgst": true 00:32:00.398 }, 00:32:00.398 "method": "bdev_nvme_attach_controller" 00:32:00.398 }' 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:00.398 07:51:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:00.398 07:51:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:00.398 07:51:04 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:00.398 07:51:04 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:00.398 07:51:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:00.398 07:51:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.655 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:00.655 ... 00:32:00.655 fio-3.35 00:32:00.655 Starting 3 threads 00:32:00.912 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.169 [2024-10-07 07:51:05.090096] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:01.169 [2024-10-07 07:51:05.090141] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:13.373 00:32:13.373 filename0: (groupid=0, jobs=1): err= 0: pid=129844: Mon Oct 7 07:51:15 2024 00:32:13.373 read: IOPS=289, BW=36.2MiB/s (37.9MB/s)(364MiB/10047msec) 00:32:13.373 slat (nsec): min=6239, max=26419, avg=11155.95, stdev=1747.70 00:32:13.373 clat (usec): min=6265, max=52589, avg=10337.17, stdev=1877.07 00:32:13.373 lat (usec): min=6276, max=52602, avg=10348.32, stdev=1877.08 00:32:13.373 clat percentiles (usec): 00:32:13.373 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:32:13.373 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:32:13.373 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:32:13.373 | 99.00th=[12256], 99.50th=[12649], 99.90th=[51119], 99.95th=[52167], 00:32:13.373 | 99.99th=[52691] 00:32:13.373 bw ( KiB/s): min=33792, max=38656, per=34.28%, avg=37196.80, stdev=1037.72, samples=20 00:32:13.373 iops : min= 264, max= 302, avg=290.60, stdev= 8.11, samples=20 00:32:13.373 lat (msec) : 10=35.73%, 20=64.10%, 50=0.03%, 100=0.14% 00:32:13.373 cpu : usr=94.01%, sys=5.65%, ctx=24, majf=0, minf=122 00:32:13.373 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.373 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.373 issued rwts: total=2908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.373 filename0: (groupid=0, jobs=1): err= 0: pid=129845: Mon Oct 7 07:51:15 2024 00:32:13.373 read: IOPS=275, BW=34.4MiB/s (36.0MB/s)(345MiB/10046msec) 00:32:13.373 slat (nsec): min=6285, max=35114, avg=11233.41, stdev=1671.61 00:32:13.373 clat (usec): min=6702, max=52065, avg=10878.76, stdev=2299.08 00:32:13.373 lat (usec): min=6714, max=52078, avg=10890.00, stdev=2299.10 00:32:13.373 clat percentiles (usec): 00:32:13.373 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:32:13.373 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:32:13.373 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:32:13.373 | 99.00th=[13173], 99.50th=[13566], 99.90th=[51643], 99.95th=[52167], 00:32:13.373 | 99.99th=[52167] 00:32:13.373 bw ( KiB/s): min=32256, max=36608, per=32.57%, avg=35344.25, stdev=1123.45, samples=20 00:32:13.373 iops : min= 252, max= 286, avg=276.10, stdev= 8.79, samples=20 00:32:13.373 lat (msec) : 10=16.21%, 20=83.50%, 50=0.11%, 100=0.18% 00:32:13.373 cpu : usr=94.33%, sys=5.34%, ctx=21, majf=0, minf=205 00:32:13.373 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.373 issued rwts: total=2763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.373 filename0: (groupid=0, jobs=1): err= 0: pid=129846: Mon Oct 7 07:51:15 2024 00:32:13.374 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(356MiB/10006msec) 00:32:13.374 slat (nsec): min=6198, max=54549, avg=11107.46, 
stdev=1957.97 00:32:13.374 clat (usec): min=6187, max=13867, avg=10533.30, stdev=876.47 00:32:13.374 lat (usec): min=6198, max=13876, avg=10544.40, stdev=876.50 00:32:13.374 clat percentiles (usec): 00:32:13.374 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:32:13.374 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:32:13.374 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:32:13.374 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13566], 99.95th=[13566], 00:32:13.374 | 99.99th=[13829] 00:32:13.374 bw ( KiB/s): min=35072, max=37888, per=33.55%, avg=36403.20, stdev=750.28, samples=20 00:32:13.374 iops : min= 274, max= 296, avg=284.40, stdev= 5.86, samples=20 00:32:13.374 lat (msec) : 10=24.28%, 20=75.72% 00:32:13.374 cpu : usr=93.94%, sys=5.71%, ctx=21, majf=0, minf=145 00:32:13.374 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.374 issued rwts: total=2846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.374 00:32:13.374 Run status group 0 (all jobs): 00:32:13.374 READ: bw=106MiB/s (111MB/s), 34.4MiB/s-36.2MiB/s (36.0MB/s-37.9MB/s), io=1065MiB (1116MB), run=10006-10047msec 00:32:13.374 07:51:15 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:13.374 07:51:15 -- target/dif.sh@43 -- # local sub 00:32:13.374 07:51:15 -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.374 07:51:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.374 07:51:15 -- target/dif.sh@36 -- # local sub_id=0 00:32:13.374 07:51:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.374 07:51:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.374 07:51:15 -- common/autotest_common.sh@10 -- # set +x 
00:32:13.374 07:51:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.374 07:51:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.374 07:51:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.374 07:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:13.374 07:51:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.374 00:32:13.374 real 0m11.247s 00:32:13.374 user 0m35.401s 00:32:13.374 sys 0m1.955s 00:32:13.374 07:51:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.374 07:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:13.374 ************************************ 00:32:13.374 END TEST fio_dif_digest 00:32:13.374 ************************************ 00:32:13.374 07:51:15 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:13.374 07:51:15 -- target/dif.sh@147 -- # nvmftestfini 00:32:13.374 07:51:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:13.374 07:51:15 -- nvmf/common.sh@116 -- # sync 00:32:13.374 07:51:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:13.374 07:51:15 -- nvmf/common.sh@119 -- # set +e 00:32:13.374 07:51:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:13.374 07:51:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:13.374 rmmod nvme_tcp 00:32:13.374 rmmod nvme_fabrics 00:32:13.374 rmmod nvme_keyring 00:32:13.374 07:51:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:13.374 07:51:15 -- nvmf/common.sh@123 -- # set -e 00:32:13.374 07:51:15 -- nvmf/common.sh@124 -- # return 0 00:32:13.374 07:51:15 -- nvmf/common.sh@477 -- # '[' -n 121049 ']' 00:32:13.374 07:51:15 -- nvmf/common.sh@478 -- # killprocess 121049 00:32:13.374 07:51:15 -- common/autotest_common.sh@926 -- # '[' -z 121049 ']' 00:32:13.374 07:51:15 -- common/autotest_common.sh@930 -- # kill -0 121049 00:32:13.374 07:51:15 -- common/autotest_common.sh@931 -- # uname 00:32:13.374 07:51:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:13.374 07:51:15 
-- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121049 00:32:13.374 07:51:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:13.374 07:51:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:13.374 07:51:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121049' 00:32:13.374 killing process with pid 121049 00:32:13.374 07:51:15 -- common/autotest_common.sh@945 -- # kill 121049 00:32:13.374 07:51:15 -- common/autotest_common.sh@950 -- # wait 121049 00:32:13.374 07:51:15 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:13.374 07:51:15 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:14.797 Waiting for block devices as requested 00:32:14.797 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:14.797 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:14.797 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:14.797 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:14.797 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:15.056 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:15.056 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:15.056 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:15.056 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:15.315 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:15.315 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:15.315 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:15.575 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:15.575 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:15.575 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:15.575 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:15.833 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:15.833 07:51:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:15.833 07:51:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:15.833 07:51:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:32:15.833 07:51:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:15.833 07:51:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.833 07:51:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.833 07:51:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.371 07:51:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:18.371 00:32:18.371 real 1m13.331s 00:32:18.371 user 7m8.484s 00:32:18.371 sys 0m19.450s 00:32:18.371 07:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.371 07:51:21 -- common/autotest_common.sh@10 -- # set +x 00:32:18.371 ************************************ 00:32:18.371 END TEST nvmf_dif 00:32:18.371 ************************************ 00:32:18.371 07:51:21 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:18.371 07:51:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:18.371 07:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:18.371 07:51:21 -- common/autotest_common.sh@10 -- # set +x 00:32:18.371 ************************************ 00:32:18.371 START TEST nvmf_abort_qd_sizes 00:32:18.371 ************************************ 00:32:18.371 07:51:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:18.371 * Looking for test storage... 
00:32:18.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.371 07:51:21 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.371 07:51:21 -- nvmf/common.sh@7 -- # uname -s 00:32:18.371 07:51:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.371 07:51:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.371 07:51:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.371 07:51:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.371 07:51:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.371 07:51:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.371 07:51:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.371 07:51:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.371 07:51:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.371 07:51:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.371 07:51:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:18.371 07:51:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:18.371 07:51:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.371 07:51:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.371 07:51:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.371 07:51:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.371 07:51:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.371 07:51:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.371 07:51:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.371 07:51:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.371 07:51:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.371 07:51:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.371 07:51:21 -- paths/export.sh@5 -- # export PATH 00:32:18.371 07:51:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.371 07:51:21 -- nvmf/common.sh@46 -- # : 0 00:32:18.371 07:51:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:18.371 07:51:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:18.371 
07:51:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:18.371 07:51:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.371 07:51:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.371 07:51:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:18.371 07:51:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:18.371 07:51:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:18.371 07:51:21 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:18.371 07:51:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:18.371 07:51:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.371 07:51:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:18.371 07:51:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:18.371 07:51:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:18.371 07:51:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.371 07:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:18.371 07:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.371 07:51:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:18.371 07:51:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:18.371 07:51:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:18.371 07:51:21 -- common/autotest_common.sh@10 -- # set +x 00:32:23.647 07:51:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:23.647 07:51:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:23.647 07:51:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:23.647 07:51:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:23.647 07:51:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:23.647 07:51:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:23.647 07:51:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:23.647 07:51:26 -- nvmf/common.sh@294 -- # net_devs=() 00:32:23.647 07:51:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:23.647 
07:51:26 -- nvmf/common.sh@295 -- # e810=() 00:32:23.647 07:51:26 -- nvmf/common.sh@295 -- # local -ga e810 00:32:23.647 07:51:26 -- nvmf/common.sh@296 -- # x722=() 00:32:23.647 07:51:26 -- nvmf/common.sh@296 -- # local -ga x722 00:32:23.647 07:51:26 -- nvmf/common.sh@297 -- # mlx=() 00:32:23.647 07:51:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:23.647 07:51:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.647 07:51:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.647 07:51:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:23.647 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:32:23.647 07:51:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:23.647 07:51:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:23.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:23.647 07:51:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.647 07:51:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.647 07:51:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.647 07:51:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:23.647 Found net devices under 0000:af:00.0: cvl_0_0 00:32:23.647 07:51:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:23.647 07:51:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.647 07:51:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.647 07:51:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:23.647 Found net devices under 0000:af:00.1: cvl_0_1 00:32:23.647 07:51:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:23.647 07:51:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:23.647 07:51:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:23.647 07:51:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.647 07:51:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.647 07:51:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:23.647 07:51:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.647 07:51:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.647 07:51:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:23.647 07:51:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.647 07:51:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.647 07:51:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:23.647 07:51:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:23.647 07:51:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.647 07:51:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.647 07:51:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.647 07:51:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.647 07:51:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:23.647 07:51:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:32:23.647 07:51:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.647 07:51:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.647 07:51:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:23.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:32:23.647 00:32:23.647 --- 10.0.0.2 ping statistics --- 00:32:23.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.647 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:32:23.647 07:51:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:32:23.647 00:32:23.647 --- 10.0.0.1 ping statistics --- 00:32:23.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.647 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:23.647 07:51:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.647 07:51:26 -- nvmf/common.sh@410 -- # return 0 00:32:23.647 07:51:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:23.647 07:51:26 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:25.552 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:25.552 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:25.552 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:25.811 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:25.811 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:25.811 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:25.811 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:25.812 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:26.750 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:26.750 07:51:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.750 07:51:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:26.750 07:51:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:26.750 07:51:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.750 07:51:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:26.750 07:51:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:26.750 07:51:30 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:26.750 07:51:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:26.750 07:51:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:26.750 07:51:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.750 07:51:30 -- nvmf/common.sh@469 -- # nvmfpid=137951 00:32:26.750 07:51:30 -- nvmf/common.sh@470 -- # waitforlisten 137951 00:32:26.750 07:51:30 -- common/autotest_common.sh@819 -- # '[' -z 137951 ']' 00:32:26.750 07:51:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.750 07:51:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:26.750 07:51:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
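The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which retries (max_retries=100 in the trace) until the target's RPC socket appears. A minimal sketch of that retry loop, polling a filesystem path in place of a live SPDK socket (`wait_for_path` is a hypothetical stand-in, not the real waitforlisten):

```shell
#!/usr/bin/env bash
# Poll until a path exists, in the spirit of waitforlisten's retry loop.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # stand-in for nvmf_tgt creating its socket
wait_for_path "$tmp/spdk.sock" && echo ready
```

The real helper additionally issues an RPC over the socket to confirm the target is responsive, not merely that the socket file exists.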
00:32:26.750 07:51:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:26.750 07:51:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.750 07:51:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:26.750 [2024-10-07 07:51:30.687449] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:26.750 [2024-10-07 07:51:30.687494] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.750 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.009 [2024-10-07 07:51:30.746842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.009 [2024-10-07 07:51:30.826451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:27.009 [2024-10-07 07:51:30.826557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.009 [2024-10-07 07:51:30.826565] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.009 [2024-10-07 07:51:30.826571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:27.009 [2024-10-07 07:51:30.826605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.009 [2024-10-07 07:51:30.826623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.009 [2024-10-07 07:51:30.826713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.009 [2024-10-07 07:51:30.826714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.578 07:51:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:27.578 07:51:31 -- common/autotest_common.sh@852 -- # return 0 00:32:27.578 07:51:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:27.578 07:51:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:27.578 07:51:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.578 07:51:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.578 07:51:31 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:27.578 07:51:31 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:27.578 07:51:31 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:27.578 07:51:31 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:27.838 07:51:31 -- scripts/common.sh@312 -- # local nvmes 00:32:27.838 07:51:31 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]] 00:32:27.838 07:51:31 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:27.838 07:51:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:27.838 07:51:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:32:27.838 07:51:31 -- scripts/common.sh@322 -- # uname -s 00:32:27.838 07:51:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:27.838 07:51:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:27.838 07:51:31 -- scripts/common.sh@327 -- # (( 1 )) 00:32:27.838 07:51:31 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:27.838 07:51:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:27.838 07:51:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:27.838 07:51:31 -- common/autotest_common.sh@10 -- # set +x 00:32:27.838 ************************************ 00:32:27.838 START TEST spdk_target_abort 00:32:27.838 ************************************ 00:32:27.838 07:51:31 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:27.838 07:51:31 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:32:27.838 07:51:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.838 07:51:31 -- common/autotest_common.sh@10 -- # set +x 00:32:31.128 spdk_targetn1 00:32:31.128 07:51:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.128 07:51:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.128 07:51:34 -- common/autotest_common.sh@10 -- # set +x 00:32:31.128 [2024-10-07 07:51:34.396218] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.128 07:51:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:31.128 07:51:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.128 07:51:34 -- common/autotest_common.sh@10 -- # 
set +x 00:32:31.128 07:51:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:31.128 07:51:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.128 07:51:34 -- common/autotest_common.sh@10 -- # set +x 00:32:31.128 07:51:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:31.128 07:51:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.128 07:51:34 -- common/autotest_common.sh@10 -- # set +x 00:32:31.128 [2024-10-07 07:51:34.429090] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.128 07:51:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.128 07:51:34 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.128 07:51:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:31.128 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.662 Initializing NVMe Controllers 00:32:33.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:33.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:33.662 Initialization complete. Launching workers. 
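The `for r in trtype adrfam traddr trsvcid subnqn` trace above (abort_qd_sizes.sh@28–29) shows rabort growing the `-r` transport-ID string one `name:value` pair at a time. A standalone sketch of that assembly; the loop body is a reconstruction of what the traced expansions show, not the verbatim script:

```shell
#!/bin/sh
# Rebuild rabort's transport-ID string: append "name:value" for each
# set variable, space-separated, exactly as the traced targets grow.
trtype=tcp; adrfam=IPv4; traddr=10.0.0.2; trsvcid=4420
subnqn=nqn.2016-06.io.spdk:spdk_target
target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    eval "val=\$$r"
    [ -n "$val" ] && target="${target:+$target }$r:$val"
done
echo "$target"
```

The result is passed verbatim to the abort example via `-r`, once per queue depth in `qds=(4 24 64)`.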
00:32:33.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 13307, failed: 0 00:32:33.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1416, failed to submit 11891 00:32:33.662 success 760, unsuccess 656, failed 0 00:32:33.662 07:51:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.662 07:51:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:33.921 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.210 Initializing NVMe Controllers 00:32:37.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:37.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:37.210 Initialization complete. Launching workers. 00:32:37.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8626, failed: 0 00:32:37.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1231, failed to submit 7395 00:32:37.210 success 363, unsuccess 868, failed 0 00:32:37.210 07:51:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:37.210 07:51:40 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:37.210 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.499 Initializing NVMe Controllers 00:32:40.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:40.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:40.499 Initialization complete. Launching workers. 
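Each abort run above reports I/O completed, aborts submitted, and aborts that failed to submit; in every run logged here the submitted and failed-to-submit counts sum to the completed total (e.g. 1416 + 11891 = 13307 for the qd=4 run). A small check of that bookkeeping invariant (`check_run` is a hypothetical helper, using the figures from this log):

```shell
#!/bin/sh
# Verify abort-run bookkeeping: every completed I/O either had an abort
# submitted for it or an abort that failed to submit.
check_run() {
    completed=$1; submitted=$2; failed_to_submit=$3
    [ $((submitted + failed_to_submit)) -eq "$completed" ]
}
check_run 13307 1416 11891 && echo "qd=4 run consistent"
```

The success/unsuccess split then reports, of the aborts actually submitted, how many completed successfully.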
00:32:40.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 39243, failed: 0 00:32:40.499 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2889, failed to submit 36354 00:32:40.499 success 602, unsuccess 2287, failed 0 00:32:40.499 07:51:44 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:40.499 07:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.499 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.499 07:51:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.499 07:51:44 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:40.499 07:51:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.499 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:32:41.878 07:51:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:41.878 07:51:45 -- target/abort_qd_sizes.sh@62 -- # killprocess 137951 00:32:41.878 07:51:45 -- common/autotest_common.sh@926 -- # '[' -z 137951 ']' 00:32:41.878 07:51:45 -- common/autotest_common.sh@930 -- # kill -0 137951 00:32:41.878 07:51:45 -- common/autotest_common.sh@931 -- # uname 00:32:41.878 07:51:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:41.878 07:51:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137951 00:32:41.878 07:51:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:41.878 07:51:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:41.878 07:51:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137951' 00:32:41.878 killing process with pid 137951 00:32:41.878 07:51:45 -- common/autotest_common.sh@945 -- # kill 137951 00:32:41.878 07:51:45 -- common/autotest_common.sh@950 -- # wait 137951 00:32:41.878 00:32:41.878 real 0m14.107s 00:32:41.878 user 0m56.012s 00:32:41.878 sys 0m2.405s 00:32:41.878 07:51:45 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:41.878 07:51:45 -- common/autotest_common.sh@10 -- # set +x 00:32:41.878 ************************************ 00:32:41.878 END TEST spdk_target_abort 00:32:41.878 ************************************ 00:32:41.878 07:51:45 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:41.878 07:51:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:41.878 07:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:41.879 07:51:45 -- common/autotest_common.sh@10 -- # set +x 00:32:41.879 ************************************ 00:32:41.879 START TEST kernel_target_abort 00:32:41.879 ************************************ 00:32:41.879 07:51:45 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:41.879 07:51:45 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:41.879 07:51:45 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:41.879 07:51:45 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:41.879 07:51:45 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:41.879 07:51:45 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:41.879 07:51:45 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:41.879 07:51:45 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:41.879 07:51:45 -- nvmf/common.sh@627 -- # local block nvme 00:32:41.879 07:51:45 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:41.879 07:51:45 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:41.879 07:51:45 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:41.879 07:51:45 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:44.418 Waiting for block devices as requested 00:32:44.418 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:44.675 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:44.675 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:44.675 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:44.675 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:44.934 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:44.934 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:44.934 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:44.934 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:45.193 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:45.193 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:45.193 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:45.452 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:45.452 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:45.452 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:45.452 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:45.711 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:45.711 07:51:49 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:45.711 07:51:49 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:45.711 07:51:49 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:45.712 07:51:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:45.712 07:51:49 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:45.712 No valid GPT data, bailing 00:32:45.712 07:51:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:45.712 07:51:49 -- scripts/common.sh@393 -- # pt= 00:32:45.712 07:51:49 -- 
scripts/common.sh@394 -- # return 1 00:32:45.712 07:51:49 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:45.712 07:51:49 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:45.712 07:51:49 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:45.712 07:51:49 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:45.712 07:51:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:45.712 07:51:49 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:45.712 07:51:49 -- nvmf/common.sh@654 -- # echo 1 00:32:45.712 07:51:49 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:45.712 07:51:49 -- nvmf/common.sh@656 -- # echo 1 00:32:45.712 07:51:49 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:45.712 07:51:49 -- nvmf/common.sh@663 -- # echo tcp 00:32:45.712 07:51:49 -- nvmf/common.sh@664 -- # echo 4420 00:32:45.712 07:51:49 -- nvmf/common.sh@665 -- # echo ipv4 00:32:45.712 07:51:49 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:45.712 07:51:49 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:45.712 00:32:45.712 Discovery Log Number of Records 2, Generation counter 2 00:32:45.712 =====Discovery Log Entry 0====== 00:32:45.712 trtype: tcp 00:32:45.712 adrfam: ipv4 00:32:45.712 subtype: current discovery subsystem 00:32:45.712 treq: not specified, sq flow control disable supported 00:32:45.712 portid: 1 00:32:45.712 trsvcid: 4420 00:32:45.712 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:45.712 traddr: 10.0.0.1 00:32:45.712 eflags: none 00:32:45.712 sectype: none 00:32:45.712 =====Discovery Log Entry 1====== 00:32:45.712 trtype: tcp 00:32:45.712 adrfam: ipv4 00:32:45.712 subtype: nvme subsystem 00:32:45.712 treq: not specified, sq 
flow control disable supported 00:32:45.712 portid: 1 00:32:45.712 trsvcid: 4420 00:32:45.712 subnqn: kernel_target 00:32:45.712 traddr: 10.0.0.1 00:32:45.712 eflags: none 00:32:45.712 sectype: none 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:45.712 07:51:49 -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:45.712 07:51:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:45.971 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.264 Initializing NVMe Controllers 00:32:49.264 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:49.264 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:49.264 Initialization complete. Launching workers. 00:32:49.264 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77741, failed: 0 00:32:49.264 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 77741, failed to submit 0 00:32:49.264 success 0, unsuccess 77741, failed 0 00:32:49.264 07:51:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:49.264 07:51:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:49.264 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.554 Initializing NVMe Controllers 00:32:52.554 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:52.554 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:52.554 Initialization complete. Launching workers. 
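The kernel_target_abort setup earlier in the log (configure_kernel_target, nvmf/common.sh@645–668) builds the in-kernel nvmet target by creating directories and writing attribute files under /sys/kernel/config/nvmet. The sketch below replays that layout against a scratch directory so the steps can be inspected without root or the nvmet module; xtrace does not show redirection targets, so the attribute file names (attr_serial, device_path, addr_*) are an assumption based on the kernel nvmet configfs ABI:

```shell
#!/usr/bin/env bash
# Dry-run of configure_kernel_target's configfs layout under a scratch root.
nvmet=$(mktemp -d)                       # stands in for /sys/kernel/config/nvmet
subsys=$nvmet/subsystems/kernel_target
port=$nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port/subsystems"
echo SPDK-kernel_target > "$subsys/attr_serial"            # assumed target file
echo 1                  > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1       > "$subsys/namespaces/1/device_path"
echo 1                  > "$subsys/namespaces/1/enable"
echo 10.0.0.1           > "$port/addr_traddr"
echo tcp                > "$port/addr_trtype"
echo 4420               > "$port/addr_trsvcid"
echo ipv4               > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"      # expose the subsystem on the port
```

On a real system the final symlink is what makes the subsystem visible to `nvme discover`, matching the second discovery log entry (subnqn: kernel_target) shown above; clean_kernel_target later removes the link and rmdirs the tree in reverse order.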
00:32:52.554 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 130670, failed: 0 00:32:52.554 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32886, failed to submit 97784 00:32:52.554 success 0, unsuccess 32886, failed 0 00:32:52.554 07:51:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:52.554 07:51:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:52.555 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.092 Initializing NVMe Controllers 00:32:55.092 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:55.092 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:55.092 Initialization complete. Launching workers. 00:32:55.092 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 126222, failed: 0 00:32:55.092 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31554, failed to submit 94668 00:32:55.092 success 0, unsuccess 31554, failed 0 00:32:55.092 07:51:58 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:55.092 07:51:58 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:55.092 07:51:58 -- nvmf/common.sh@677 -- # echo 0 00:32:55.092 07:51:58 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:55.092 07:51:58 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:55.092 07:51:58 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:55.092 07:51:58 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:55.092 07:51:58 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:55.092 07:51:58 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 
00:32:55.092 00:32:55.092 real 0m13.270s 00:32:55.092 user 0m6.677s 00:32:55.092 sys 0m3.270s 00:32:55.092 07:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.092 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:32:55.092 ************************************ 00:32:55.092 END TEST kernel_target_abort 00:32:55.092 ************************************ 00:32:55.092 07:51:59 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:55.092 07:51:59 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:55.092 07:51:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:55.092 07:51:59 -- nvmf/common.sh@116 -- # sync 00:32:55.092 07:51:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:55.092 07:51:59 -- nvmf/common.sh@119 -- # set +e 00:32:55.092 07:51:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:55.092 07:51:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:55.092 rmmod nvme_tcp 00:32:55.092 rmmod nvme_fabrics 00:32:55.092 rmmod nvme_keyring 00:32:55.352 07:51:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:55.352 07:51:59 -- nvmf/common.sh@123 -- # set -e 00:32:55.352 07:51:59 -- nvmf/common.sh@124 -- # return 0 00:32:55.352 07:51:59 -- nvmf/common.sh@477 -- # '[' -n 137951 ']' 00:32:55.352 07:51:59 -- nvmf/common.sh@478 -- # killprocess 137951 00:32:55.352 07:51:59 -- common/autotest_common.sh@926 -- # '[' -z 137951 ']' 00:32:55.352 07:51:59 -- common/autotest_common.sh@930 -- # kill -0 137951 00:32:55.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (137951) - No such process 00:32:55.352 07:51:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 137951 is not found' 00:32:55.352 Process with pid 137951 is not found 00:32:55.352 07:51:59 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:55.352 07:51:59 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:57.890 0000:5e:00.0 (8086 
0a54): Already using the nvme driver 00:32:57.890 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:57.890 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:58.149 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:58.149 07:52:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:58.149 07:52:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:58.149 07:52:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:58.149 07:52:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:58.149 07:52:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.149 07:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:58.149 07:52:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.684 07:52:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:00.684 00:33:00.684 real 0m42.316s 00:33:00.684 user 1m6.573s 00:33:00.684 sys 0m13.453s 00:33:00.684 07:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.684 
07:52:04 -- common/autotest_common.sh@10 -- # set +x 00:33:00.684 ************************************ 00:33:00.684 END TEST nvmf_abort_qd_sizes 00:33:00.684 ************************************ 00:33:00.684 07:52:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:00.684 07:52:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:00.684 07:52:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:00.684 07:52:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:00.684 07:52:04 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:33:00.684 07:52:04 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:33:00.684 07:52:04 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:33:00.684 07:52:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:00.684 07:52:04 -- common/autotest_common.sh@10 -- # set +x 00:33:00.684 07:52:04 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:33:00.684 07:52:04 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:33:00.684 07:52:04 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:33:00.684 07:52:04 -- common/autotest_common.sh@10 -- # set +x 00:33:04.878 INFO: APP EXITING 00:33:04.878 INFO: killing all VMs 00:33:04.878 INFO: killing vhost app 00:33:04.878 INFO: EXIT DONE 00:33:07.414 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:33:07.414 
0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:07.414 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:10.729 Cleaning 00:33:10.729 Removing: /var/run/dpdk/spdk0/config 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:10.729 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:10.729 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:10.729 Removing: /var/run/dpdk/spdk1/config 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 
00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:10.729 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:10.729 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:10.729 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:10.729 Removing: /var/run/dpdk/spdk2/config 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:10.729 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:10.729 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:10.729 Removing: /var/run/dpdk/spdk3/config 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:10.729 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:10.729 Removing: 
/var/run/dpdk/spdk3/fbarray_memzone 00:33:10.729 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:10.729 Removing: /var/run/dpdk/spdk4/config 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:10.729 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:10.729 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:10.729 Removing: /dev/shm/bdev_svc_trace.1 00:33:10.729 Removing: /dev/shm/nvmf_trace.0 00:33:10.729 Removing: /dev/shm/spdk_tgt_trace.pid3940086 00:33:10.729 Removing: /var/run/dpdk/spdk0 00:33:10.729 Removing: /var/run/dpdk/spdk1 00:33:10.729 Removing: /var/run/dpdk/spdk2 00:33:10.729 Removing: /var/run/dpdk/spdk3 00:33:10.729 Removing: /var/run/dpdk/spdk4 00:33:10.729 Removing: /var/run/dpdk/spdk_pid100187 00:33:10.729 Removing: /var/run/dpdk/spdk_pid100426 00:33:10.729 Removing: /var/run/dpdk/spdk_pid106199 00:33:10.729 Removing: /var/run/dpdk/spdk_pid106466 00:33:10.729 Removing: /var/run/dpdk/spdk_pid108662 00:33:10.729 Removing: /var/run/dpdk/spdk_pid116330 00:33:10.729 Removing: /var/run/dpdk/spdk_pid116335 00:33:10.729 Removing: /var/run/dpdk/spdk_pid11988 00:33:10.729 Removing: /var/run/dpdk/spdk_pid121314 00:33:10.729 Removing: /var/run/dpdk/spdk_pid123253 00:33:10.729 Removing: /var/run/dpdk/spdk_pid125210 00:33:10.729 Removing: /var/run/dpdk/spdk_pid126248 00:33:10.729 Removing: /var/run/dpdk/spdk_pid128278 00:33:10.729 Removing: /var/run/dpdk/spdk_pid129593 00:33:10.729 Removing: /var/run/dpdk/spdk_pid138652 00:33:10.729 Removing: 
/var/run/dpdk/spdk_pid139110 00:33:10.729 Removing: /var/run/dpdk/spdk_pid139737 00:33:10.729 Removing: /var/run/dpdk/spdk_pid142032 00:33:10.729 Removing: /var/run/dpdk/spdk_pid142493 00:33:10.729 Removing: /var/run/dpdk/spdk_pid142957 00:33:10.729 Removing: /var/run/dpdk/spdk_pid16088 00:33:10.729 Removing: /var/run/dpdk/spdk_pid21998 00:33:10.729 Removing: /var/run/dpdk/spdk_pid23291 00:33:10.729 Removing: /var/run/dpdk/spdk_pid24741 00:33:10.729 Removing: /var/run/dpdk/spdk_pid29074 00:33:10.729 Removing: /var/run/dpdk/spdk_pid33061 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3937826 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3939032 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3940086 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3940749 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3942247 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3943631 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3943914 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3944305 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3944938 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3945370 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3945623 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3945868 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3946135 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3946924 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3950037 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3950314 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3950573 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3950749 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3951079 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3951300 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3951788 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3952016 00:33:10.729 Removing: /var/run/dpdk/spdk_pid3952277 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3952402 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3952544 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3952774 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3953321 00:33:10.730 Removing: 
/var/run/dpdk/spdk_pid3953543 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3953849 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954102 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954134 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954194 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954426 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954670 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3954898 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3955148 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3955374 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3955615 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3955852 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3956093 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3956325 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3956572 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3956804 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3957045 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3957279 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3957520 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3957751 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3957998 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3958224 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3958467 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3958696 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3958949 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3959175 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3959419 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3959654 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3959897 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3960123 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3960372 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3960601 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3960842 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3961076 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3961317 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3961553 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3961800 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3962032 
00:33:10.730 Removing: /var/run/dpdk/spdk_pid3962283 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3962522 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3962787 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3963004 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3963271 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3963521 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3963799 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3963999 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3964300 00:33:10.730 Removing: /var/run/dpdk/spdk_pid3967900 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4051518 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4055712 00:33:10.730 Removing: /var/run/dpdk/spdk_pid40560 00:33:10.730 Removing: /var/run/dpdk/spdk_pid40563 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4066117 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4071457 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4075408 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4076081 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4084505 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4084752 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4088966 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4094750 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4097322 00:33:10.730 Removing: /var/run/dpdk/spdk_pid4107642 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4117115 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4118850 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4119824 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4136254 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4140002 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4144443 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4146205 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4148075 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4148306 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4148544 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4148773 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4149295 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4151118 00:33:11.043 Removing: 
/var/run/dpdk/spdk_pid4152210 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4152825 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4158380 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4164219 00:33:11.043 Removing: /var/run/dpdk/spdk_pid4169185 00:33:11.043 Removing: /var/run/dpdk/spdk_pid45210 00:33:11.043 Removing: /var/run/dpdk/spdk_pid45441 00:33:11.043 Removing: /var/run/dpdk/spdk_pid45628 00:33:11.043 Removing: /var/run/dpdk/spdk_pid45904 00:33:11.043 Removing: /var/run/dpdk/spdk_pid46015 00:33:11.043 Removing: /var/run/dpdk/spdk_pid47346 00:33:11.043 Removing: /var/run/dpdk/spdk_pid49223 00:33:11.043 Removing: /var/run/dpdk/spdk_pid51200 00:33:11.043 Removing: /var/run/dpdk/spdk_pid52809 00:33:11.043 Removing: /var/run/dpdk/spdk_pid54582 00:33:11.043 Removing: /var/run/dpdk/spdk_pid56170 00:33:11.043 Removing: /var/run/dpdk/spdk_pid62174 00:33:11.043 Removing: /var/run/dpdk/spdk_pid62715 00:33:11.043 Removing: /var/run/dpdk/spdk_pid64472 00:33:11.043 Removing: /var/run/dpdk/spdk_pid65335 00:33:11.043 Removing: /var/run/dpdk/spdk_pid71158 00:33:11.043 Removing: /var/run/dpdk/spdk_pid73888 00:33:11.043 Removing: /var/run/dpdk/spdk_pid79220 00:33:11.043 Removing: /var/run/dpdk/spdk_pid84911 00:33:11.043 Removing: /var/run/dpdk/spdk_pid91151 00:33:11.043 Removing: /var/run/dpdk/spdk_pid91779 00:33:11.043 Removing: /var/run/dpdk/spdk_pid92465 00:33:11.043 Removing: /var/run/dpdk/spdk_pid93156 00:33:11.043 Removing: /var/run/dpdk/spdk_pid93898 00:33:11.043 Removing: /var/run/dpdk/spdk_pid94590 00:33:11.043 Removing: /var/run/dpdk/spdk_pid95278 00:33:11.043 Removing: /var/run/dpdk/spdk_pid95854 00:33:11.043 Clean 00:33:11.043 killing process with pid 3893774 00:33:19.165 killing process with pid 3893771 00:33:19.165 killing process with pid 3893773 00:33:19.165 killing process with pid 3893772 00:33:19.165 07:52:22 -- common/autotest_common.sh@1436 -- # return 0 00:33:19.165 07:52:22 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:33:19.165 07:52:22 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:33:19.165 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:33:19.165 07:52:22 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:33:19.165 07:52:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:19.166 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:33:19.166 07:52:22 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:19.166 07:52:22 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:19.166 07:52:22 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:19.166 07:52:22 -- spdk/autotest.sh@394 -- # hash lcov 00:33:19.166 07:52:22 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:19.166 07:52:22 -- spdk/autotest.sh@396 -- # hostname 00:33:19.166 07:52:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:19.166 geninfo: WARNING: invalid characters removed from testname! 
00:33:37.259 07:52:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:39.165 07:52:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:40.544 07:52:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:42.448 07:52:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:43.827 07:52:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:45.205 07:52:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:47.112 07:52:50 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:47.112 07:52:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.113 07:52:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:47.113 07:52:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.113 07:52:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.113 07:52:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.113 07:52:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.113 07:52:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.113 07:52:50 -- paths/export.sh@5 -- $ export PATH 00:33:47.113 07:52:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.113 07:52:50 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:47.113 07:52:50 -- common/autobuild_common.sh@440 -- $ date +%s 00:33:47.113 07:52:50 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728280370.XXXXXX 00:33:47.113 07:52:50 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728280370.fJjXZr 00:33:47.113 07:52:50 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:33:47.113 07:52:50 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:33:47.113 07:52:50 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:47.113 07:52:50 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:47.113 07:52:50 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:47.113 07:52:50 -- common/autobuild_common.sh@456 -- $ get_config_params 00:33:47.113 07:52:50 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:47.113 07:52:50 -- common/autotest_common.sh@10 -- $ set +x 00:33:47.113 07:52:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:33:47.113 07:52:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:33:47.113 07:52:50 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.113 07:52:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:47.113 07:52:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:47.113 07:52:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:47.113 07:52:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:47.113 07:52:50 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:47.113 07:52:50 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:47.113 07:52:50 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:47.113 07:52:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:47.113 + [[ -n 3850911 ]] 00:33:47.113 + sudo kill 3850911 00:33:47.138 [Pipeline] } 00:33:47.153 [Pipeline] // stage 00:33:47.158 [Pipeline] } 00:33:47.172 [Pipeline] // timeout 00:33:47.178 [Pipeline] } 00:33:47.192 [Pipeline] // catchError 00:33:47.197 [Pipeline] } 00:33:47.211 [Pipeline] // wrap 00:33:47.217 [Pipeline] } 00:33:47.230 [Pipeline] // catchError 00:33:47.254 [Pipeline] stage 00:33:47.256 [Pipeline] { (Epilogue) 00:33:47.269 [Pipeline] catchError 00:33:47.271 [Pipeline] { 
00:33:47.284 [Pipeline] echo 00:33:47.285 Cleanup processes 00:33:47.291 [Pipeline] sh 00:33:47.577 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.577 156187 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.591 [Pipeline] sh 00:33:47.876 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.876 ++ grep -v 'sudo pgrep' 00:33:47.876 ++ awk '{print $1}' 00:33:47.876 + sudo kill -9 00:33:47.876 + true 00:33:47.888 [Pipeline] sh 00:33:48.171 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:58.162 [Pipeline] sh 00:33:58.439 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:58.439 Artifacts sizes are good 00:33:58.448 [Pipeline] archiveArtifacts 00:33:58.454 Archiving artifacts 00:33:58.644 [Pipeline] sh 00:33:58.927 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:58.941 [Pipeline] cleanWs 00:33:58.951 [WS-CLEANUP] Deleting project workspace... 00:33:58.951 [WS-CLEANUP] Deferred wipeout is used... 00:33:58.958 [WS-CLEANUP] done 00:33:58.959 [Pipeline] } 00:33:58.976 [Pipeline] // catchError 00:33:58.986 [Pipeline] sh 00:33:59.292 + logger -p user.info -t JENKINS-CI 00:33:59.341 [Pipeline] } 00:33:59.353 [Pipeline] // stage 00:33:59.358 [Pipeline] } 00:33:59.370 [Pipeline] // node 00:33:59.374 [Pipeline] End of Pipeline 00:33:59.400 Finished: SUCCESS